00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3688
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3289
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.072 The recommended git tool is: git
00:00:00.072 using credential 00000000-0000-0000-0000-000000000002
00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.107 Fetching changes from the remote Git repository
00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.191 Using shallow fetch with depth 1
00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.191 > git --version # timeout=10
00:00:00.216 > git --version # 'git version 2.39.2'
00:00:00.216 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.232 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.232 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.504 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.513 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.524 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD)
00:00:04.524 > git config core.sparsecheckout # timeout=10
00:00:04.533 > git read-tree -mu HEAD # timeout=10
00:00:04.548 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5
00:00:04.562 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems"
00:00:04.562 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:04.641 [Pipeline] Start of Pipeline
00:00:04.654 [Pipeline] library
00:00:04.655 Loading library shm_lib@master
00:00:04.655 Library shm_lib@master is cached. Copying from home.
00:00:04.669 [Pipeline] node
00:00:04.682 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.684 [Pipeline] {
00:00:04.692 [Pipeline] catchError
00:00:04.693 [Pipeline] {
00:00:04.705 [Pipeline] wrap
00:00:04.714 [Pipeline] {
00:00:04.722 [Pipeline] stage
00:00:04.723 [Pipeline] { (Prologue)
00:00:04.935 [Pipeline] sh
00:00:05.214 + logger -p user.info -t JENKINS-CI
00:00:05.232 [Pipeline] echo
00:00:05.234 Node: GP11
00:00:05.240 [Pipeline] sh
00:00:05.579 [Pipeline] setCustomBuildProperty
00:00:05.589 [Pipeline] echo
00:00:05.590 Cleanup processes
00:00:05.594 [Pipeline] sh
00:00:05.873 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.873 1498866 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.885 [Pipeline] sh
00:00:06.167 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.167 ++ grep -v 'sudo pgrep'
00:00:06.167 ++ awk '{print $1}'
00:00:06.167 + sudo kill -9
00:00:06.167 + true
00:00:06.184 [Pipeline] cleanWs
00:00:06.194 [WS-CLEANUP] Deleting project workspace...
00:00:06.194 [WS-CLEANUP] Deferred wipeout is used...
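[Editor's note] The "Cleanup processes" step traced above boils down to a pgrep/kill pipeline. A minimal sketch of that step, assuming the same workspace path (the WORKSPACE_DIR variable is illustrative and not part of the job scripts):

    # Find anything still running against the previous checkout and kill it.
    WORKSPACE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$WORKSPACE_DIR" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With no matching PIDs, kill -9 exits non-zero; '|| true' keeps the step green,
    # which is the "+ true" visible in the trace above.
    sudo kill -9 $pids || true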
00:00:06.200 [WS-CLEANUP] done 00:00:06.206 [Pipeline] setCustomBuildProperty 00:00:06.228 [Pipeline] sh 00:00:06.512 + sudo git config --global --replace-all safe.directory '*' 00:00:06.611 [Pipeline] httpRequest 00:00:06.647 [Pipeline] echo 00:00:06.649 Sorcerer 10.211.164.101 is alive 00:00:06.659 [Pipeline] httpRequest 00:00:06.664 HttpMethod: GET 00:00:06.664 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:06.665 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:06.683 Response Code: HTTP/1.1 200 OK 00:00:06.683 Success: Status code 200 is in the accepted range: 200,404 00:00:06.684 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:10.995 [Pipeline] sh 00:00:11.283 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:11.304 [Pipeline] httpRequest 00:00:11.337 [Pipeline] echo 00:00:11.339 Sorcerer 10.211.164.101 is alive 00:00:11.350 [Pipeline] httpRequest 00:00:11.355 HttpMethod: GET 00:00:11.356 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:11.357 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:11.375 Response Code: HTTP/1.1 200 OK 00:00:11.375 Success: Status code 200 is in the accepted range: 200,404 00:00:11.376 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:15.166 [Pipeline] sh 00:01:15.476 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:18.027 [Pipeline] sh 00:01:18.313 + git -C spdk log --oneline -n5 00:01:18.313 f7b31b2b9 log: declare g_deprecation_epoch static 00:01:18.313 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:01:18.313 3731556bd lvol: declare g_lvol_if static 00:01:18.313 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:01:18.313 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:01:18.335 [Pipeline] withCredentials 00:01:18.347 > git --version # timeout=10 00:01:18.359 > git --version # 'git version 2.39.2' 00:01:18.378 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:18.381 [Pipeline] { 00:01:18.393 [Pipeline] retry 00:01:18.395 [Pipeline] { 00:01:18.413 [Pipeline] sh 00:01:18.698 + git ls-remote http://dpdk.org/git/dpdk main 00:01:18.711 [Pipeline] } 00:01:18.736 [Pipeline] // retry 00:01:18.741 [Pipeline] } 00:01:18.761 [Pipeline] // withCredentials 00:01:18.771 [Pipeline] httpRequest 00:01:18.801 [Pipeline] echo 00:01:18.803 Sorcerer 10.211.164.101 is alive 00:01:18.812 [Pipeline] httpRequest 00:01:18.817 HttpMethod: GET 00:01:18.818 URL: http://10.211.164.101/packages/dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:18.819 Sending request to url: http://10.211.164.101/packages/dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:18.827 Response Code: HTTP/1.1 200 OK 00:01:18.827 Success: Status code 200 is in the accepted range: 200,404 00:01:18.828 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:25.878 [Pipeline] sh 00:01:26.168 + tar --no-same-owner -xf dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:28.085 [Pipeline] sh 00:01:28.371 + git -C dpdk log --oneline -n5 00:01:28.371 90ec9b0db5 
net/mlx5: replenish MPRQ buffers for miniCQEs 00:01:28.371 3f11694354 net/mlx5: fix RSS and queue action validation 00:01:28.371 e6dfb25012 net/mlx5: fix action configuration validation 00:01:28.371 cf9a91c67b net/mlx5: fix disabling E-Switch default flow rules 00:01:28.371 463e5abe09 common/mlx5: remove unneeded field when modify RQ table 00:01:28.383 [Pipeline] } 00:01:28.401 [Pipeline] // stage 00:01:28.410 [Pipeline] stage 00:01:28.412 [Pipeline] { (Prepare) 00:01:28.455 [Pipeline] writeFile 00:01:28.475 [Pipeline] sh 00:01:28.763 + logger -p user.info -t JENKINS-CI 00:01:28.778 [Pipeline] sh 00:01:29.080 + logger -p user.info -t JENKINS-CI 00:01:29.093 [Pipeline] sh 00:01:29.379 + cat autorun-spdk.conf 00:01:29.379 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.379 SPDK_TEST_NVMF=1 00:01:29.379 SPDK_TEST_NVME_CLI=1 00:01:29.379 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.379 SPDK_TEST_NVMF_NICS=e810 00:01:29.379 SPDK_TEST_VFIOUSER=1 00:01:29.379 SPDK_RUN_UBSAN=1 00:01:29.379 NET_TYPE=phy 00:01:29.379 SPDK_TEST_NATIVE_DPDK=main 00:01:29.379 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.387 RUN_NIGHTLY=1 00:01:29.392 [Pipeline] readFile 00:01:29.419 [Pipeline] withEnv 00:01:29.421 [Pipeline] { 00:01:29.439 [Pipeline] sh 00:01:29.728 + set -ex 00:01:29.728 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:29.728 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.728 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.728 ++ SPDK_TEST_NVMF=1 00:01:29.728 ++ SPDK_TEST_NVME_CLI=1 00:01:29.728 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.728 ++ SPDK_TEST_NVMF_NICS=e810 00:01:29.728 ++ SPDK_TEST_VFIOUSER=1 00:01:29.728 ++ SPDK_RUN_UBSAN=1 00:01:29.728 ++ NET_TYPE=phy 00:01:29.728 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:29.728 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.728 ++ RUN_NIGHTLY=1 00:01:29.728 + case $SPDK_TEST_NVMF_NICS in 00:01:29.728 + DRIVERS=ice 00:01:29.728 + [[ tcp == \r\d\m\a ]] 00:01:29.728 + [[ -n ice ]] 00:01:29.728 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:29.728 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:29.728 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:29.728 rmmod: ERROR: Module irdma is not currently loaded 00:01:29.729 rmmod: ERROR: Module i40iw is not currently loaded 00:01:29.729 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:29.729 + true 00:01:29.729 + for D in $DRIVERS 00:01:29.729 + sudo modprobe ice 00:01:29.729 + exit 0 00:01:29.739 [Pipeline] } 00:01:29.755 [Pipeline] // withEnv 00:01:29.761 [Pipeline] } 00:01:29.784 [Pipeline] // stage 00:01:29.795 [Pipeline] catchError 00:01:29.797 [Pipeline] { 00:01:29.812 [Pipeline] timeout 00:01:29.813 Timeout set to expire in 50 min 00:01:29.815 [Pipeline] { 00:01:29.830 [Pipeline] stage 00:01:29.832 [Pipeline] { (Tests) 00:01:29.849 [Pipeline] sh 00:01:30.146 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.146 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.146 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.146 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:30.146 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.146 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.146 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:30.146 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.146 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.146 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.146 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:30.146 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.147 + source /etc/os-release 00:01:30.147 ++ NAME='Fedora Linux' 00:01:30.147 ++ VERSION='38 (Cloud Edition)' 00:01:30.147 ++ ID=fedora 00:01:30.147 ++ VERSION_ID=38 00:01:30.147 ++ VERSION_CODENAME= 00:01:30.147 ++ PLATFORM_ID=platform:f38 00:01:30.147 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:30.147 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.147 ++ LOGO=fedora-logo-icon 00:01:30.147 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:30.147 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.147 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:30.147 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.147 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.147 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.147 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:30.147 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.147 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:30.147 ++ SUPPORT_END=2024-05-14 00:01:30.147 ++ VARIANT='Cloud Edition' 00:01:30.147 ++ VARIANT_ID=cloud 00:01:30.147 + uname -a 00:01:30.147 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:30.147 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:31.096 Hugepages 00:01:31.096 node hugesize free / total 00:01:31.096 node0 1048576kB 0 / 0 00:01:31.096 node0 2048kB 0 / 0 00:01:31.096 node1 1048576kB 0 / 0 00:01:31.096 node1 2048kB 0 / 0 00:01:31.096 00:01:31.096 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.096 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:31.096 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:31.096 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:31.096 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:31.096 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:31.096 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:31.096 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:31.096 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:31.096 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:31.096 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:31.096 + rm -f /tmp/spdk-ld-path 00:01:31.096 + source autorun-spdk.conf 00:01:31.096 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.096 ++ SPDK_TEST_NVMF=1 00:01:31.096 ++ SPDK_TEST_NVME_CLI=1 00:01:31.096 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.096 ++ SPDK_TEST_NVMF_NICS=e810 00:01:31.096 ++ SPDK_TEST_VFIOUSER=1 00:01:31.096 ++ SPDK_RUN_UBSAN=1 00:01:31.096 ++ NET_TYPE=phy 00:01:31.096 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:31.096 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.096 ++ RUN_NIGHTLY=1 00:01:31.096 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.096 + [[ -n '' ]] 00:01:31.096 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.096 + for M in /var/spdk/build-*-manifest.txt 00:01:31.096 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.096 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.096 + for M in /var/spdk/build-*-manifest.txt 00:01:31.096 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.096 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.096 ++ uname 00:01:31.096 + [[ Linux == \L\i\n\u\x ]] 00:01:31.096 + sudo dmesg -T 00:01:31.096 + sudo dmesg --clear 00:01:31.096 + dmesg_pid=1500191 00:01:31.096 + [[ Fedora Linux == FreeBSD ]] 00:01:31.096 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.096 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.096 + sudo dmesg -Tw 00:01:31.096 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.096 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.096 + export FIO_BIN=/usr/src/fio-static/fio 00:01:31.096 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.096 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.096 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:31.096 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.096 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.096 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.096 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.096 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.096 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.096 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.096 Test configuration: 00:01:31.097 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.097 SPDK_TEST_NVMF=1 00:01:31.097 SPDK_TEST_NVME_CLI=1 00:01:31.097 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.097 SPDK_TEST_NVMF_NICS=e810 00:01:31.097 SPDK_TEST_VFIOUSER=1 00:01:31.097 SPDK_RUN_UBSAN=1 00:01:31.097 NET_TYPE=phy 00:01:31.097 SPDK_TEST_NATIVE_DPDK=main 00:01:31.097 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.356 RUN_NIGHTLY=1 05:57:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:31.356 05:57:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.356 05:57:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.356 05:57:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.356 05:57:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.356 05:57:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.356 05:57:24 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.356 05:57:24 -- paths/export.sh@5 -- $ export PATH 00:01:31.356 05:57:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.356 05:57:24 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:31.356 05:57:24 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:31.356 05:57:24 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721707044.XXXXXX 00:01:31.356 05:57:24 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721707044.DM73tW 00:01:31.356 05:57:24 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:31.356 05:57:24 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:01:31.356 05:57:24 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.356 05:57:24 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:31.356 05:57:24 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:31.356 05:57:24 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.356 05:57:24 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:31.356 05:57:24 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:31.356 05:57:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.356 05:57:24 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:31.356 05:57:24 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:31.356 05:57:24 -- pm/common@17 -- $ local monitor 00:01:31.356 05:57:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.356 05:57:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.356 05:57:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.356 05:57:24 -- pm/common@21 -- $ date +%s 00:01:31.356 05:57:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.356 05:57:24 -- pm/common@21 -- $ date +%s 00:01:31.356 05:57:24 -- pm/common@25 -- $ sleep 1 00:01:31.356 05:57:24 -- pm/common@21 -- $ date +%s 00:01:31.356 05:57:24 -- pm/common@21 -- $ date +%s 00:01:31.356 05:57:24 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721707044 00:01:31.356 05:57:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721707044 00:01:31.356 05:57:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721707044 00:01:31.356 05:57:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721707044 00:01:31.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721707044_collect-vmstat.pm.log 00:01:31.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721707044_collect-cpu-load.pm.log 00:01:31.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721707044_collect-cpu-temp.pm.log 00:01:31.356 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721707044_collect-bmc-pm.bmc.pm.log 00:01:32.297 05:57:25 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:32.297 05:57:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.297 05:57:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.297 05:57:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.297 05:57:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:32.297 Tue Jul 23 03:57:25 AM UTC 2024 00:01:32.297 05:57:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:32.297 v24.09-pre-297-gf7b31b2b9 00:01:32.297 05:57:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:32.297 05:57:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:32.297 05:57:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:32.297 05:57:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:32.297 05:57:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:32.297 05:57:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.297 ************************************ 00:01:32.297 START TEST ubsan 00:01:32.297 ************************************ 00:01:32.297 05:57:25 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:32.297 using ubsan 00:01:32.297 00:01:32.297 real 0m0.000s 00:01:32.297 user 0m0.000s 00:01:32.297 sys 0m0.000s 00:01:32.297 05:57:25 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:32.297 05:57:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:32.297 ************************************ 00:01:32.297 END TEST ubsan 00:01:32.297 ************************************ 00:01:32.297 05:57:25 -- common/autotest_common.sh@1142 -- $ return 0 00:01:32.297 05:57:25 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:32.297 05:57:25 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:32.297 05:57:25 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:32.297 05:57:25 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:32.297 05:57:25 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:32.297 05:57:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.297 ************************************ 00:01:32.297 START TEST build_native_dpdk 00:01:32.297 ************************************ 00:01:32.297 05:57:25 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:32.297 90ec9b0db5 net/mlx5: replenish MPRQ buffers for miniCQEs 00:01:32.297 3f11694354 net/mlx5: fix RSS and queue action validation 00:01:32.297 e6dfb25012 net/mlx5: fix action configuration validation 00:01:32.297 cf9a91c67b net/mlx5: fix disabling E-Switch default flow rules 00:01:32.297 463e5abe09 common/mlx5: remove unneeded field when modify RQ table 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:32.297 05:57:25 
build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:32.297 patching file config/rte_config.h 00:01:32.297 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:32.297 05:57:25 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc2 24.07.0 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 24.07.0 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:32.297 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:32.298 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc2 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc2 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc2 =~ ^[0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^0x ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^[a-f0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:01:32.557 05:57:25 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:32.557 05:57:25 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:32.557 05:57:25 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:32.557 05:57:25 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:32.557 05:57:25 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:32.557 05:57:25 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:36.774 The Meson build system 00:01:36.774 Version: 1.3.1 00:01:36.774 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:36.774 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:36.774 Build type: native build 00:01:36.774 Program cat found: YES (/usr/bin/cat) 00:01:36.774 Project name: DPDK 00:01:36.774 Project version: 24.07.0-rc2 00:01:36.774 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:36.774 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:36.774 Host machine cpu family: x86_64 00:01:36.774 Host machine cpu: x86_64 00:01:36.774 Message: ## Building in Developer Mode ## 00:01:36.774 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.774 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:36.774 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.774 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:36.774 Program cat found: YES (/usr/bin/cat) 00:01:36.774 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
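[Editor's note] The long xtrace above is the version-comparison helper from scripts/common.sh at work: the DPDK checkout reports 24.07.0-rc2, which compares as not older than 21.11.0; a small config/rte_config.h patch is then applied, and a second comparison against 24.07.0 (with the "rc2" field coerced to 0) also returns false before dpdk_kmods is set to false. A simplified, illustrative sketch of that field-by-field dotted-version test (not the actual cmp_versions implementation):

    # Return 0 (true) if version $1 sorts strictly before $2,
    # splitting on '.' and '-', and treating non-numeric fields such as "rc2" as 0.
    version_lt() {
        local IFS='.-'
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            [[ $x =~ ^[0-9]+$ ]] || x=0
            [[ $y =~ ^[0-9]+$ ]] || y=0
            (( 10#$x < 10#$y )) && return 0
            (( 10#$x > 10#$y )) && return 1
        done
        return 1   # all fields equal: not strictly less-than
    }
    version_lt 24.07.0-rc2 21.11.0 && echo older || echo "not older"   # prints "not older"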
00:01:36.774 Compiler for C supports arguments -march=native: YES 00:01:36.774 Checking for size of "void *" : 8 00:01:36.774 Checking for size of "void *" : 8 (cached) 00:01:36.774 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:36.774 Library m found: YES 00:01:36.774 Library numa found: YES 00:01:36.774 Has header "numaif.h" : YES 00:01:36.774 Library fdt found: NO 00:01:36.774 Library execinfo found: NO 00:01:36.774 Has header "execinfo.h" : YES 00:01:36.774 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:36.774 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.774 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.774 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.774 Run-time dependency openssl found: YES 3.0.9 00:01:36.774 Run-time dependency libpcap found: YES 1.10.4 00:01:36.774 Has header "pcap.h" with dependency libpcap: YES 00:01:36.774 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.774 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.774 Compiler for C supports arguments -Wformat: YES 00:01:36.774 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.774 Compiler for C supports arguments -Wformat-security: NO 00:01:36.774 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.774 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.774 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.774 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.774 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.774 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.774 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.774 Compiler for C supports arguments -Wundef: YES 00:01:36.774 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.774 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.774 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:36.774 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.774 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.774 Program objdump found: YES (/usr/bin/objdump) 00:01:36.774 Compiler for C supports arguments -mavx512f: YES 00:01:36.774 Checking if "AVX512 checking" compiles: YES 00:01:36.774 Fetching value of define "__SSE4_2__" : 1 00:01:36.774 Fetching value of define "__AES__" : 1 00:01:36.774 Fetching value of define "__AVX__" : 1 00:01:36.774 Fetching value of define "__AVX2__" : (undefined) 00:01:36.774 Fetching value of define "__AVX512BW__" : (undefined) 00:01:36.774 Fetching value of define "__AVX512CD__" : (undefined) 00:01:36.774 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:36.774 Fetching value of define "__AVX512F__" : (undefined) 00:01:36.774 Fetching value of define "__AVX512VL__" : (undefined) 00:01:36.774 Fetching value of define "__PCLMUL__" : 1 00:01:36.774 Fetching value of define "__RDRND__" : 1 00:01:36.774 Fetching value of define "__RDSEED__" : (undefined) 00:01:36.774 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.774 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.774 Message: lib/log: Defining dependency "log" 00:01:36.774 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.774 Message: lib/argparse: Defining dependency "argparse" 00:01:36.774 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.774 Checking for function 
"getentropy" : NO 00:01:36.774 Message: lib/eal: Defining dependency "eal" 00:01:36.774 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:36.774 Message: lib/ring: Defining dependency "ring" 00:01:36.774 Message: lib/rcu: Defining dependency "rcu" 00:01:36.774 Message: lib/mempool: Defining dependency "mempool" 00:01:36.774 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.774 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.774 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.774 Compiler for C supports arguments -mpclmul: YES 00:01:36.774 Compiler for C supports arguments -maes: YES 00:01:36.774 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.774 Compiler for C supports arguments -mavx512bw: YES 00:01:36.774 Compiler for C supports arguments -mavx512dq: YES 00:01:36.774 Compiler for C supports arguments -mavx512vl: YES 00:01:36.774 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.774 Compiler for C supports arguments -mavx2: YES 00:01:36.774 Compiler for C supports arguments -mavx: YES 00:01:36.774 Message: lib/net: Defining dependency "net" 00:01:36.774 Message: lib/meter: Defining dependency "meter" 00:01:36.774 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.774 Message: lib/pci: Defining dependency "pci" 00:01:36.774 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.774 Message: lib/metrics: Defining dependency "metrics" 00:01:36.774 Message: lib/hash: Defining dependency "hash" 00:01:36.774 Message: lib/timer: Defining dependency "timer" 00:01:36.774 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.774 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:36.774 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:36.774 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:36.774 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:36.774 Message: lib/acl: Defining dependency "acl" 00:01:36.774 Message: lib/bbdev: Defining dependency "bbdev" 00:01:36.774 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:36.774 Run-time dependency libelf found: YES 0.190 00:01:36.775 Message: lib/bpf: Defining dependency "bpf" 00:01:36.775 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:36.775 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.775 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.775 Message: lib/distributor: Defining dependency "distributor" 00:01:36.775 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.775 Message: lib/efd: Defining dependency "efd" 00:01:36.775 Message: lib/eventdev: Defining dependency "eventdev" 00:01:36.775 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:36.775 Message: lib/gpudev: Defining dependency "gpudev" 00:01:36.775 Message: lib/gro: Defining dependency "gro" 00:01:36.775 Message: lib/gso: Defining dependency "gso" 00:01:36.775 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:36.775 Message: lib/jobstats: Defining dependency "jobstats" 00:01:36.775 Message: lib/latencystats: Defining dependency "latencystats" 00:01:36.775 Message: lib/lpm: Defining dependency "lpm" 00:01:36.775 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.775 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.775 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:36.775 Compiler for C supports arguments -mavx512f -mavx512dq 
-mavx512ifma: YES 00:01:36.775 Message: lib/member: Defining dependency "member" 00:01:36.775 Message: lib/pcapng: Defining dependency "pcapng" 00:01:36.775 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.775 Message: lib/power: Defining dependency "power" 00:01:36.775 Message: lib/rawdev: Defining dependency "rawdev" 00:01:36.775 Message: lib/regexdev: Defining dependency "regexdev" 00:01:36.775 Message: lib/mldev: Defining dependency "mldev" 00:01:36.775 Message: lib/rib: Defining dependency "rib" 00:01:36.775 Message: lib/reorder: Defining dependency "reorder" 00:01:36.775 Message: lib/sched: Defining dependency "sched" 00:01:36.775 Message: lib/security: Defining dependency "security" 00:01:36.775 Message: lib/stack: Defining dependency "stack" 00:01:36.775 Has header "linux/userfaultfd.h" : YES 00:01:36.775 Has header "linux/vduse.h" : YES 00:01:36.775 Message: lib/vhost: Defining dependency "vhost" 00:01:36.775 Message: lib/ipsec: Defining dependency "ipsec" 00:01:36.775 Message: lib/pdcp: Defining dependency "pdcp" 00:01:36.775 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.775 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.775 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:36.775 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:36.775 Message: lib/fib: Defining dependency "fib" 00:01:36.775 Message: lib/port: Defining dependency "port" 00:01:36.775 Message: lib/pdump: Defining dependency "pdump" 00:01:36.775 Message: lib/table: Defining dependency "table" 00:01:36.775 Message: lib/pipeline: Defining dependency "pipeline" 00:01:36.775 Message: lib/graph: Defining dependency "graph" 00:01:36.775 Message: lib/node: Defining dependency "node" 00:01:38.165 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:38.165 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:38.165 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:38.165 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:38.165 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:38.165 Compiler for C supports arguments -Wno-unused-value: YES 00:01:38.165 Compiler for C supports arguments -Wno-format: YES 00:01:38.165 Compiler for C supports arguments -Wno-format-security: YES 00:01:38.165 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:38.165 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:38.165 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:38.165 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:38.165 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:38.165 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.165 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:38.165 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:38.165 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:38.165 Has header "sys/epoll.h" : YES 00:01:38.165 Program doxygen found: YES (/usr/bin/doxygen) 00:01:38.165 Configuring doxy-api-html.conf using configuration 00:01:38.165 Configuring doxy-api-man.conf using configuration 00:01:38.165 Program mandb found: YES (/usr/bin/mandb) 00:01:38.165 Program sphinx-build found: NO 00:01:38.165 Configuring rte_build_config.h using configuration 00:01:38.165 Message: 00:01:38.165 ================= 00:01:38.165 Applications Enabled 00:01:38.165 ================= 00:01:38.165 
00:01:38.165 apps: 00:01:38.165 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:38.165 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:38.165 test-pmd, test-regex, test-sad, test-security-perf, 00:01:38.165 00:01:38.165 Message: 00:01:38.165 ================= 00:01:38.165 Libraries Enabled 00:01:38.165 ================= 00:01:38.165 00:01:38.165 libs: 00:01:38.165 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:38.165 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:38.165 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:38.165 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:38.165 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:38.165 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:38.165 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:38.165 graph, node, 00:01:38.165 00:01:38.165 Message: 00:01:38.165 =============== 00:01:38.165 Drivers Enabled 00:01:38.165 =============== 00:01:38.165 00:01:38.165 common: 00:01:38.165 00:01:38.165 bus: 00:01:38.166 pci, vdev, 00:01:38.166 mempool: 00:01:38.166 ring, 00:01:38.166 dma: 00:01:38.166 00:01:38.166 net: 00:01:38.166 i40e, 00:01:38.166 raw: 00:01:38.166 00:01:38.166 crypto: 00:01:38.166 00:01:38.166 compress: 00:01:38.166 00:01:38.166 regex: 00:01:38.166 00:01:38.166 ml: 00:01:38.166 00:01:38.166 vdpa: 00:01:38.166 00:01:38.166 event: 00:01:38.166 00:01:38.166 baseband: 00:01:38.166 00:01:38.166 gpu: 00:01:38.166 00:01:38.166 00:01:38.166 Message: 00:01:38.166 ================= 00:01:38.166 Content Skipped 00:01:38.166 ================= 00:01:38.166 00:01:38.166 apps: 00:01:38.166 00:01:38.166 libs: 00:01:38.166 00:01:38.166 drivers: 00:01:38.166 common/cpt: not in enabled drivers build config 00:01:38.166 common/dpaax: not in enabled drivers build config 00:01:38.166 common/iavf: not in enabled drivers build config 00:01:38.166 common/idpf: not in enabled drivers build config 00:01:38.166 common/ionic: not in enabled drivers build config 00:01:38.166 common/mvep: not in enabled drivers build config 00:01:38.166 common/octeontx: not in enabled drivers build config 00:01:38.166 bus/auxiliary: not in enabled drivers build config 00:01:38.166 bus/cdx: not in enabled drivers build config 00:01:38.166 bus/dpaa: not in enabled drivers build config 00:01:38.166 bus/fslmc: not in enabled drivers build config 00:01:38.166 bus/ifpga: not in enabled drivers build config 00:01:38.166 bus/platform: not in enabled drivers build config 00:01:38.166 bus/uacce: not in enabled drivers build config 00:01:38.166 bus/vmbus: not in enabled drivers build config 00:01:38.166 common/cnxk: not in enabled drivers build config 00:01:38.166 common/mlx5: not in enabled drivers build config 00:01:38.166 common/nfp: not in enabled drivers build config 00:01:38.166 common/nitrox: not in enabled drivers build config 00:01:38.166 common/qat: not in enabled drivers build config 00:01:38.166 common/sfc_efx: not in enabled drivers build config 00:01:38.166 mempool/bucket: not in enabled drivers build config 00:01:38.166 mempool/cnxk: not in enabled drivers build config 00:01:38.166 mempool/dpaa: not in enabled drivers build config 00:01:38.166 mempool/dpaa2: not in enabled drivers build config 00:01:38.166 mempool/octeontx: not in enabled drivers build config 00:01:38.166 mempool/stack: not in enabled drivers build 
config 00:01:38.166 dma/cnxk: not in enabled drivers build config 00:01:38.166 dma/dpaa: not in enabled drivers build config 00:01:38.166 dma/dpaa2: not in enabled drivers build config 00:01:38.166 dma/hisilicon: not in enabled drivers build config 00:01:38.166 dma/idxd: not in enabled drivers build config 00:01:38.166 dma/ioat: not in enabled drivers build config 00:01:38.166 dma/odm: not in enabled drivers build config 00:01:38.166 dma/skeleton: not in enabled drivers build config 00:01:38.166 net/af_packet: not in enabled drivers build config 00:01:38.166 net/af_xdp: not in enabled drivers build config 00:01:38.166 net/ark: not in enabled drivers build config 00:01:38.166 net/atlantic: not in enabled drivers build config 00:01:38.166 net/avp: not in enabled drivers build config 00:01:38.166 net/axgbe: not in enabled drivers build config 00:01:38.166 net/bnx2x: not in enabled drivers build config 00:01:38.166 net/bnxt: not in enabled drivers build config 00:01:38.166 net/bonding: not in enabled drivers build config 00:01:38.166 net/cnxk: not in enabled drivers build config 00:01:38.166 net/cpfl: not in enabled drivers build config 00:01:38.166 net/cxgbe: not in enabled drivers build config 00:01:38.166 net/dpaa: not in enabled drivers build config 00:01:38.166 net/dpaa2: not in enabled drivers build config 00:01:38.166 net/e1000: not in enabled drivers build config 00:01:38.166 net/ena: not in enabled drivers build config 00:01:38.166 net/enetc: not in enabled drivers build config 00:01:38.166 net/enetfec: not in enabled drivers build config 00:01:38.166 net/enic: not in enabled drivers build config 00:01:38.166 net/failsafe: not in enabled drivers build config 00:01:38.166 net/fm10k: not in enabled drivers build config 00:01:38.166 net/gve: not in enabled drivers build config 00:01:38.166 net/hinic: not in enabled drivers build config 00:01:38.166 net/hns3: not in enabled drivers build config 00:01:38.166 net/iavf: not in enabled drivers build config 00:01:38.166 net/ice: not in enabled drivers build config 00:01:38.166 net/idpf: not in enabled drivers build config 00:01:38.166 net/igc: not in enabled drivers build config 00:01:38.166 net/ionic: not in enabled drivers build config 00:01:38.166 net/ipn3ke: not in enabled drivers build config 00:01:38.166 net/ixgbe: not in enabled drivers build config 00:01:38.166 net/mana: not in enabled drivers build config 00:01:38.166 net/memif: not in enabled drivers build config 00:01:38.166 net/mlx4: not in enabled drivers build config 00:01:38.166 net/mlx5: not in enabled drivers build config 00:01:38.166 net/mvneta: not in enabled drivers build config 00:01:38.166 net/mvpp2: not in enabled drivers build config 00:01:38.166 net/netvsc: not in enabled drivers build config 00:01:38.166 net/nfb: not in enabled drivers build config 00:01:38.166 net/nfp: not in enabled drivers build config 00:01:38.166 net/ngbe: not in enabled drivers build config 00:01:38.166 net/null: not in enabled drivers build config 00:01:38.166 net/octeontx: not in enabled drivers build config 00:01:38.166 net/octeon_ep: not in enabled drivers build config 00:01:38.166 net/pcap: not in enabled drivers build config 00:01:38.166 net/pfe: not in enabled drivers build config 00:01:38.166 net/qede: not in enabled drivers build config 00:01:38.166 net/ring: not in enabled drivers build config 00:01:38.166 net/sfc: not in enabled drivers build config 00:01:38.166 net/softnic: not in enabled drivers build config 00:01:38.166 net/tap: not in enabled drivers build config 00:01:38.166 
net/thunderx: not in enabled drivers build config 00:01:38.166 net/txgbe: not in enabled drivers build config 00:01:38.166 net/vdev_netvsc: not in enabled drivers build config 00:01:38.166 net/vhost: not in enabled drivers build config 00:01:38.166 net/virtio: not in enabled drivers build config 00:01:38.166 net/vmxnet3: not in enabled drivers build config 00:01:38.166 raw/cnxk_bphy: not in enabled drivers build config 00:01:38.166 raw/cnxk_gpio: not in enabled drivers build config 00:01:38.166 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:38.166 raw/ifpga: not in enabled drivers build config 00:01:38.166 raw/ntb: not in enabled drivers build config 00:01:38.166 raw/skeleton: not in enabled drivers build config 00:01:38.166 crypto/armv8: not in enabled drivers build config 00:01:38.166 crypto/bcmfs: not in enabled drivers build config 00:01:38.166 crypto/caam_jr: not in enabled drivers build config 00:01:38.166 crypto/ccp: not in enabled drivers build config 00:01:38.166 crypto/cnxk: not in enabled drivers build config 00:01:38.166 crypto/dpaa_sec: not in enabled drivers build config 00:01:38.166 crypto/dpaa2_sec: not in enabled drivers build config 00:01:38.166 crypto/ionic: not in enabled drivers build config 00:01:38.166 crypto/ipsec_mb: not in enabled drivers build config 00:01:38.166 crypto/mlx5: not in enabled drivers build config 00:01:38.166 crypto/mvsam: not in enabled drivers build config 00:01:38.166 crypto/nitrox: not in enabled drivers build config 00:01:38.166 crypto/null: not in enabled drivers build config 00:01:38.166 crypto/octeontx: not in enabled drivers build config 00:01:38.166 crypto/openssl: not in enabled drivers build config 00:01:38.166 crypto/scheduler: not in enabled drivers build config 00:01:38.166 crypto/uadk: not in enabled drivers build config 00:01:38.166 crypto/virtio: not in enabled drivers build config 00:01:38.166 compress/isal: not in enabled drivers build config 00:01:38.166 compress/mlx5: not in enabled drivers build config 00:01:38.166 compress/nitrox: not in enabled drivers build config 00:01:38.166 compress/octeontx: not in enabled drivers build config 00:01:38.166 compress/uadk: not in enabled drivers build config 00:01:38.166 compress/zlib: not in enabled drivers build config 00:01:38.166 regex/mlx5: not in enabled drivers build config 00:01:38.166 regex/cn9k: not in enabled drivers build config 00:01:38.166 ml/cnxk: not in enabled drivers build config 00:01:38.166 vdpa/ifc: not in enabled drivers build config 00:01:38.166 vdpa/mlx5: not in enabled drivers build config 00:01:38.166 vdpa/nfp: not in enabled drivers build config 00:01:38.166 vdpa/sfc: not in enabled drivers build config 00:01:38.166 event/cnxk: not in enabled drivers build config 00:01:38.166 event/dlb2: not in enabled drivers build config 00:01:38.166 event/dpaa: not in enabled drivers build config 00:01:38.166 event/dpaa2: not in enabled drivers build config 00:01:38.166 event/dsw: not in enabled drivers build config 00:01:38.166 event/opdl: not in enabled drivers build config 00:01:38.166 event/skeleton: not in enabled drivers build config 00:01:38.166 event/sw: not in enabled drivers build config 00:01:38.166 event/octeontx: not in enabled drivers build config 00:01:38.166 baseband/acc: not in enabled drivers build config 00:01:38.166 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:38.166 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:38.166 baseband/la12xx: not in enabled drivers build config 00:01:38.166 baseband/null: not 
in enabled drivers build config
00:01:38.166 baseband/turbo_sw: not in enabled drivers build config
00:01:38.166 gpu/cuda: not in enabled drivers build config
00:01:38.166
00:01:38.166
00:01:38.166 Build targets in project: 224
00:01:38.166
00:01:38.166 DPDK 24.07.0-rc2
00:01:38.166
00:01:38.166 User defined options
00:01:38.166 libdir : lib
00:01:38.166 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:38.166 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:38.166 c_link_args :
00:01:38.166 enable_docs : false
00:01:38.166 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:38.166 enable_kmods : false
00:01:38.166 machine : native
00:01:38.166 tests : false
00:01:38.166
00:01:38.166 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:38.167 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:38.167 05:57:31 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:38.167 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:38.167 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:38.167 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:38.167 [3/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:38.167 [4/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:38.167 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:38.427 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:38.427 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:38.427 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:38.427 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:38.427 [10/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:38.427 [11/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:38.427 [12/723] Linking static target lib/librte_kvargs.a
00:01:38.427 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:38.427 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:38.427 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:38.688 [16/723] Linking static target lib/librte_log.a
00:01:38.688 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:01:38.688 [18/723] Linking static target lib/librte_argparse.a
00:01:38.949 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.949 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.213 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:39.214 [22/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.214 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:39.214 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:39.214 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:39.214 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:39.214 [27/723] Linking target
lib/librte_log.so.24.2 00:01:39.476 [28/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:39.476 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.476 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:39.476 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.476 [32/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.476 [33/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:39.476 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:39.476 [35/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.476 [36/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:39.476 [37/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.476 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:39.476 [39/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.476 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.476 [41/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.476 [42/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.476 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.476 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:39.476 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:39.476 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:39.476 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:39.476 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:39.476 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:39.476 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.476 [51/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:39.476 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.476 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:39.476 [54/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.476 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:39.737 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.737 [57/723] Linking target lib/librte_kvargs.so.24.2 00:01:39.737 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:39.737 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:39.737 [60/723] Linking target lib/librte_argparse.so.24.2 00:01:39.737 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:39.737 [62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.737 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.737 [64/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:39.999 [65/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.999 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.999 [67/723] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.999 [68/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.999 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.999 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.999 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.260 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.260 [73/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.260 [74/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:40.260 [75/723] Linking static target lib/librte_pci.a 00:01:40.519 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.519 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.519 [78/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:40.519 [79/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:40.519 [80/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.519 [81/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.519 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.519 [83/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:40.519 [84/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.783 [85/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.783 [86/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.783 [87/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.783 [88/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.783 [89/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:40.783 [90/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.783 [91/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:40.783 [92/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.783 [93/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.783 [94/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.783 [95/723] Linking static target lib/librte_ring.a 00:01:40.783 [96/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.783 [97/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:40.783 [98/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.783 [99/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:40.783 [100/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:40.783 [101/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:40.783 [102/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.783 [103/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.783 [104/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.783 [105/723] Linking static target lib/librte_meter.a 00:01:40.783 [106/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:40.783 [107/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.783 [108/723] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.783 [109/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.783 [110/723] Linking static target lib/librte_telemetry.a 00:01:41.045 [111/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:41.045 [112/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:41.045 [113/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:41.045 [114/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:41.045 [115/723] Linking static target lib/librte_net.a 00:01:41.045 [116/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:41.045 [117/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:41.308 [118/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.308 [119/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:41.308 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:41.308 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:41.308 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:41.308 [123/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.308 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:41.308 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:41.585 [126/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.585 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:41.585 [128/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:41.585 [129/723] Linking static target lib/librte_mempool.a 00:01:41.585 [130/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:41.585 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:41.585 [132/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.585 [133/723] Linking static target lib/librte_eal.a 00:01:41.585 [134/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:41.585 [135/723] Linking target lib/librte_telemetry.so.24.2 00:01:41.585 [136/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:41.585 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:41.586 [138/723] Linking static target lib/librte_cmdline.a 00:01:41.853 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:41.853 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:41.853 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:41.853 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:41.853 [143/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:41.853 [144/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:41.853 [145/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:41.853 [146/723] Linking static target lib/librte_cfgfile.a 00:01:41.853 [147/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:41.853 [148/723] Linking static target lib/librte_metrics.a 00:01:42.117 [149/723] Compiling C 
object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:42.117 [150/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:42.117 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:42.117 [152/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:42.117 [153/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:42.117 [154/723] Linking static target lib/librte_rcu.a 00:01:42.117 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:42.383 [156/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:42.383 [157/723] Linking static target lib/librte_bitratestats.a 00:01:42.383 [158/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:42.383 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:42.383 [160/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:42.383 [161/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:42.383 [162/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.383 [163/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:42.649 [164/723] Linking static target lib/librte_mbuf.a 00:01:42.649 [165/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.649 [166/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:42.649 [167/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.649 [168/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.649 [169/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.649 [170/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:42.649 [171/723] Linking static target lib/librte_timer.a 00:01:42.649 [172/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:42.650 [173/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.650 [174/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:42.910 [175/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:42.910 [176/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:42.910 [177/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:42.911 [178/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:42.911 [179/723] Linking static target lib/librte_bbdev.a 00:01:42.911 [180/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:42.911 [181/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.911 [182/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.911 [183/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.911 [184/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.911 [185/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:42.911 [186/723] Linking static target lib/librte_compressdev.a 00:01:43.175 [187/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:43.175 [188/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:43.175 [189/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:43.175 [190/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:43.175 [191/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:43.175 [192/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:43.439 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.698 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:43.698 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:43.962 [196/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.962 [197/723] Linking static target lib/librte_dmadev.a 00:01:43.962 [198/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:43.962 [199/723] Linking static target lib/librte_distributor.a 00:01:43.962 [200/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.962 [201/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.962 [202/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:43.962 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:43.962 [204/723] Linking static target lib/librte_bpf.a 00:01:43.962 [205/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:43.962 [206/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:43.962 [207/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.226 [208/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:44.226 [209/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:44.226 [210/723] Linking static target lib/librte_dispatcher.a 00:01:44.226 [211/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:44.226 [212/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:44.226 [213/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:44.226 [214/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:44.226 [215/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:44.226 [216/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.226 [217/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:44.226 [218/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.226 [219/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:44.226 [220/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.226 [221/723] Linking static target lib/librte_gpudev.a 00:01:44.226 [222/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:44.226 [223/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:44.487 [224/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.487 [225/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:44.487 [226/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:44.487 [227/723] Linking static target lib/librte_gro.a 00:01:44.487 [228/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.487 [229/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:44.487 [230/723] Compiling C object 
lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:44.487 [231/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:44.487 [232/723] Linking static target lib/librte_jobstats.a 00:01:44.487 [233/723] Linking static target lib/librte_gso.a 00:01:44.487 [234/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:44.487 [235/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:44.752 [237/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:44.752 [238/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [239/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:44.752 [240/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:44.752 [241/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [242/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [243/723] Linking static target lib/librte_latencystats.a 00:01:45.014 [244/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:45.014 [245/723] Linking static target lib/librte_ip_frag.a 00:01:45.014 [246/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:45.014 [247/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.014 [248/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:45.014 [249/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:45.014 [250/723] Linking static target lib/librte_efd.a 00:01:45.014 [251/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:45.014 [252/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:45.014 [253/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:45.014 [254/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:45.014 [255/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:45.274 [256/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.274 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:45.274 [258/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.538 [259/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:45.538 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:45.538 [261/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.538 [262/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:45.538 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:45.538 [264/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:45.538 [265/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:45.798 [266/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:45.798 [267/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:45.798 [268/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:45.798 [269/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:45.798 [270/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:45.798 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:45.798 [272/723] Linking static target lib/librte_regexdev.a 00:01:45.798 [273/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:45.798 [274/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:46.063 [275/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:46.063 [276/723] Linking static target lib/librte_rawdev.a 00:01:46.063 [277/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:46.063 [278/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:46.063 [279/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:46.063 [280/723] Linking static target lib/librte_pcapng.a 00:01:46.063 [281/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:46.063 [282/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:46.063 [283/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.063 [284/723] Linking static target lib/librte_power.a 00:01:46.063 [285/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:46.063 [286/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:46.063 [287/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:46.063 [288/723] Linking static target lib/librte_lpm.a 00:01:46.063 [289/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:46.063 [290/723] Linking static target lib/librte_mldev.a 00:01:46.327 [291/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:46.327 [292/723] Linking static target lib/librte_stack.a 00:01:46.327 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:46.327 [294/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.327 [295/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:46.327 [296/723] Linking static target lib/acl/libavx2_tmp.a 00:01:46.594 [297/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.594 [298/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:46.594 [299/723] Linking static target lib/librte_reorder.a 00:01:46.594 [300/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:46.594 [301/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:46.594 [302/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.594 [303/723] Linking static target lib/librte_security.a 00:01:46.594 [304/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.594 [305/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.594 [306/723] Linking static target lib/librte_cryptodev.a 00:01:46.594 [307/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.594 [308/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.594 [309/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.857 [310/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.857 [311/723] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:46.857 [312/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.857 [313/723] Linking static target lib/librte_hash.a 00:01:46.857 [314/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:46.857 [315/723] Linking static target lib/acl/libavx512_tmp.a 00:01:46.857 [316/723] Linking static target lib/librte_acl.a 00:01:47.120 [317/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:47.120 [318/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:47.120 [319/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:47.120 [320/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:47.120 [321/723] Linking static target lib/librte_rib.a 00:01:47.120 [322/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.120 [323/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:47.120 [324/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:47.120 [325/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.120 [326/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:47.120 [327/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:47.120 [328/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.120 [329/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:47.120 [330/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:47.120 [331/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.382 [332/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:47.382 [333/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:47.382 [334/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:47.382 [335/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:47.382 [336/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:47.382 [337/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:47.382 [338/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:47.382 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.645 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:47.645 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.909 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:47.909 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.909 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:48.173 [345/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:48.433 [346/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:48.433 [347/723] Linking static target lib/librte_eventdev.a 00:01:48.433 [348/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.433 [349/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:48.433 [350/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:48.433 [351/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:48.433 [352/723] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:48.433 [353/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:48.433 [354/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.433 [355/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:48.433 [356/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:48.433 [357/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:48.697 [358/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:48.697 [359/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:48.697 [360/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.697 [361/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.697 [362/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:48.697 [363/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:48.697 [364/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:48.697 [365/723] Linking static target lib/librte_member.a 00:01:48.697 [366/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:48.697 [367/723] Linking static target lib/librte_sched.a 00:01:48.697 [368/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:48.697 [369/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:48.697 [370/723] Linking static target lib/librte_fib.a 00:01:48.697 [371/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:48.697 [372/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:48.697 [373/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:48.697 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:48.965 [375/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:48.965 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:48.965 [377/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:48.965 [378/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:48.965 [379/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:49.224 [380/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.224 [381/723] Linking static target lib/librte_ethdev.a 00:01:49.224 [382/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:49.224 [383/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.224 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:49.224 [385/723] Linking static target lib/librte_ipsec.a 00:01:49.224 [386/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:49.224 [387/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.224 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.487 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:49.487 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.487 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:49.753 [392/723] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:49.753 [393/723] Linking static target lib/librte_pdump.a 00:01:49.753 [394/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:49.753 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:49.753 [396/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:49.753 [397/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.753 [398/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:49.753 [399/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:49.753 [400/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.753 [401/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:49.753 [402/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:50.017 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:50.017 [404/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:50.017 [405/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:50.017 [406/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:50.017 [407/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:50.017 [408/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:50.017 [409/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:50.017 [410/723] Linking static target lib/librte_pdcp.a 00:01:50.017 [411/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:50.017 [412/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.286 [413/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:50.286 [414/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:50.286 [415/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:50.286 [416/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:50.286 [417/723] Linking static target lib/librte_table.a 00:01:50.286 [418/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:50.286 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:50.545 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:50.545 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:50.545 [422/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:50.545 [423/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.545 [424/723] Linking static target lib/librte_graph.a 00:01:50.804 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.804 [426/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:50.804 [427/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:50.804 [428/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:51.071 [429/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:51.071 [430/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:51.071 [431/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:51.071 [432/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:51.071 
[433/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:51.071 [434/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:51.071 [435/723] Linking static target lib/librte_port.a 00:01:51.071 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:51.071 [437/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:51.071 [438/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:51.331 [439/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:51.331 [440/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:51.331 [441/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:51.331 [442/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.331 [443/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:51.594 [444/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:51.594 [445/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:51.594 [446/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:51.594 [447/723] Linking static target drivers/librte_bus_vdev.a 00:01:51.594 [448/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:51.594 [449/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.594 [450/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.594 [451/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:51.859 [452/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:51.859 [453/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:51.859 [454/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.859 [455/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:51.859 [456/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:51.859 [457/723] Linking static target drivers/librte_bus_pci.a 00:01:51.859 [458/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.859 [459/723] Linking static target lib/librte_node.a 00:01:51.859 [460/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:51.859 [461/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.120 [462/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.120 [463/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:52.120 [464/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:52.120 [465/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:52.120 [466/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:52.120 [467/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:52.120 [468/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:52.120 [469/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:52.120 [470/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:52.120 [471/723] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:52.120 [472/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:52.383 [473/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.383 [474/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:52.383 [475/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.383 [476/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:52.383 [477/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.383 [478/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:52.383 [479/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:52.644 [480/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:52.644 [481/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:52.644 [482/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:52.644 [483/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:52.644 [484/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.644 [485/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.644 [486/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.644 [487/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:52.644 [488/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.644 [489/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.644 [490/723] Linking static target drivers/librte_mempool_ring.a 00:01:52.644 [491/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:52.909 [492/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:52.909 [493/723] Linking target lib/librte_eal.so.24.2 00:01:52.909 [494/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:52.909 [495/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:52.909 [496/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:53.171 [497/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:53.171 [498/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:53.171 [499/723] Linking target lib/librte_ring.so.24.2 00:01:53.171 [500/723] Linking target lib/librte_meter.so.24.2 00:01:53.442 [501/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:53.442 [502/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:53.442 [503/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:53.442 [504/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:53.442 [505/723] Linking target lib/librte_timer.so.24.2 00:01:53.442 [506/723] Linking target lib/librte_pci.so.24.2 00:01:53.442 [507/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:53.442 [508/723] Linking target lib/librte_rcu.so.24.2 00:01:53.442 [509/723] Linking target lib/librte_mempool.so.24.2 00:01:53.442 [510/723] Linking target lib/librte_acl.so.24.2 00:01:53.442 [511/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:53.442 [512/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:53.442 [513/723] 
Linking target lib/librte_cfgfile.so.24.2 00:01:53.442 [514/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:53.722 [515/723] Linking target lib/librte_dmadev.so.24.2 00:01:53.722 [516/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:53.722 [517/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:53.722 [518/723] Linking target lib/librte_jobstats.so.24.2 00:01:53.722 [519/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:53.722 [520/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:53.722 [521/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:53.722 [522/723] Linking target lib/librte_rawdev.so.24.2 00:01:53.722 [523/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:53.722 [524/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:53.722 [525/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:53.722 [526/723] Linking target lib/librte_stack.so.24.2 00:01:53.722 [527/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:53.722 [528/723] Linking target lib/librte_mbuf.so.24.2 00:01:53.722 [529/723] Linking target lib/librte_rib.so.24.2 00:01:53.722 [530/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:53.722 [531/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:53.722 [532/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:53.985 [533/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:53.985 [534/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:53.985 [535/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:53.985 [536/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:53.985 [537/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:53.985 [538/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:53.985 [539/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:53.985 [540/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:53.985 [541/723] Linking target lib/librte_fib.so.24.2 00:01:53.985 [542/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:53.985 [543/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:54.247 [544/723] Linking target lib/librte_net.so.24.2 00:01:54.247 [545/723] Linking target lib/librte_bbdev.so.24.2 00:01:54.247 [546/723] Linking target lib/librte_compressdev.so.24.2 00:01:54.247 [547/723] Linking target lib/librte_cryptodev.so.24.2 00:01:54.247 [548/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:54.247 [549/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:54.247 [550/723] Linking target lib/librte_distributor.so.24.2 00:01:54.248 [551/723] Linking target lib/librte_gpudev.so.24.2 00:01:54.248 [552/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:54.248 [553/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 
00:01:54.248 [554/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:54.248 [555/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:54.248 [556/723] Linking target lib/librte_regexdev.so.24.2 00:01:54.248 [557/723] Linking target lib/librte_reorder.so.24.2 00:01:54.248 [558/723] Linking target lib/librte_mldev.so.24.2 00:01:54.248 [559/723] Linking target lib/librte_sched.so.24.2 00:01:54.248 [560/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:54.248 [561/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:54.248 [562/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:54.248 [563/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:54.248 [564/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:54.510 [565/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:54.510 [566/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:54.510 [567/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:54.510 [568/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:54.510 [569/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:54.510 [570/723] Linking target lib/librte_cmdline.so.24.2 00:01:54.510 [571/723] Linking target lib/librte_hash.so.24.2 00:01:54.510 [572/723] Linking target lib/librte_security.so.24.2 00:01:54.510 [573/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:54.510 [574/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:54.510 [575/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:54.510 [576/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:54.510 [577/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:54.510 [578/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:54.510 [579/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:54.510 [580/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:54.777 [581/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:54.777 [582/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:54.777 [583/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:54.777 [584/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:54.777 [585/723] Linking target lib/librte_efd.so.24.2 00:01:54.777 [586/723] Linking target lib/librte_lpm.so.24.2 00:01:54.777 [587/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:54.777 [588/723] Linking target lib/librte_member.so.24.2 00:01:54.777 [589/723] Linking target lib/librte_ipsec.so.24.2 00:01:54.777 [590/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:54.777 [591/723] Linking target lib/librte_pdcp.so.24.2 00:01:54.777 [592/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:55.040 [593/723] Generating symbol file 
lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:55.040 [594/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:55.302 [595/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:55.302 [596/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:55.302 [597/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:55.302 [598/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:55.302 [599/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:55.565 [600/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:55.565 [601/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:55.565 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:55.565 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:55.565 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:55.565 [605/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:55.565 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:55.565 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:55.829 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:55.829 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:55.829 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:55.829 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:55.829 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:55.829 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:55.829 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:55.829 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:56.089 [616/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:56.089 [617/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:56.089 [618/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:56.089 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:56.089 [620/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:56.089 [621/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:56.348 [622/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:56.348 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:56.348 [624/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:56.607 [625/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:56.607 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:56.866 [627/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:56.866 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:56.866 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:56.866 [630/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:56.866 [631/723] Compiling C 
object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:56.866 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:56.866 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:56.866 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:56.866 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:57.125 [636/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.125 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:57.125 [638/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:57.125 [639/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:57.125 [640/723] Linking target lib/librte_ethdev.so.24.2 00:01:57.125 [641/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:57.125 [642/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:57.125 [643/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:57.125 [644/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:57.384 [645/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:57.384 [646/723] Linking target lib/librte_bpf.so.24.2 00:01:57.384 [647/723] Linking target lib/librte_pcapng.so.24.2 00:01:57.384 [648/723] Linking target lib/librte_eventdev.so.24.2 00:01:57.384 [649/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:57.384 [650/723] Linking target lib/librte_gso.so.24.2 00:01:57.384 [651/723] Linking target lib/librte_gro.so.24.2 00:01:57.384 [652/723] Linking target lib/librte_metrics.so.24.2 00:01:57.384 [653/723] Linking target lib/librte_ip_frag.so.24.2 00:01:57.384 [654/723] Linking target lib/librte_power.so.24.2 00:01:57.384 [655/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:57.384 [656/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:57.384 [657/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:57.384 [658/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:57.384 [659/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:57.642 [660/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:57.642 [661/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:57.642 [662/723] Linking target lib/librte_pdump.so.24.2 00:01:57.642 [663/723] Linking target lib/librte_graph.so.24.2 00:01:57.642 [664/723] Linking target lib/librte_bitratestats.so.24.2 00:01:57.642 [665/723] Linking target lib/librte_latencystats.so.24.2 00:01:57.642 [666/723] Linking target lib/librte_dispatcher.so.24.2 00:01:57.642 [667/723] Linking target lib/librte_port.so.24.2 00:01:57.642 [668/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:57.643 [669/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:57.643 [670/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:57.643 [671/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:57.643 [672/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:57.643 [673/723] Generating symbol file 
lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:57.643 [674/723] Linking target lib/librte_node.so.24.2 00:01:57.900 [675/723] Linking target lib/librte_table.so.24.2 00:01:57.900 [676/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:57.900 [677/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:57.900 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:58.467 [679/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:58.467 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:58.725 [681/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:58.725 [682/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:58.725 [683/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:58.984 [684/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.984 [685/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:58.984 [686/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:58.984 [687/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.984 [688/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.984 [689/723] Linking static target drivers/librte_net_i40e.a 00:01:59.550 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.550 [691/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.808 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:02:00.374 [693/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:00.631 [694/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:01.564 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:09.672 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:09.672 [697/723] Linking static target lib/librte_pipeline.a 00:02:09.672 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:09.672 [699/723] Linking static target lib/librte_vhost.a 00:02:09.672 [700/723] Linking target app/dpdk-test-dma-perf 00:02:09.672 [701/723] Linking target app/dpdk-test-fib 00:02:09.672 [702/723] Linking target app/dpdk-test-sad 00:02:09.672 [703/723] Linking target app/dpdk-test-gpudev 00:02:09.672 [704/723] Linking target app/dpdk-test-regex 00:02:09.672 [705/723] Linking target app/dpdk-pdump 00:02:09.672 [706/723] Linking target app/dpdk-test-compress-perf 00:02:09.672 [707/723] Linking target app/dpdk-test-cmdline 00:02:09.672 [708/723] Linking target app/dpdk-test-pipeline 00:02:09.672 [709/723] Linking target app/dpdk-test-acl 00:02:09.672 [710/723] Linking target app/dpdk-test-flow-perf 00:02:09.672 [711/723] Linking target app/dpdk-test-mldev 00:02:09.672 [712/723] Linking target app/dpdk-graph 00:02:09.672 [713/723] Linking target app/dpdk-proc-info 00:02:09.672 [714/723] Linking target app/dpdk-test-security-perf 00:02:09.672 [715/723] Linking target app/dpdk-test-crypto-perf 00:02:09.672 [716/723] Linking target app/dpdk-test-bbdev 00:02:09.672 [717/723] Linking target app/dpdk-dumpcap 00:02:09.672 [718/723] Linking target app/dpdk-test-eventdev 00:02:09.672 [719/723] Linking target app/dpdk-testpmd 00:02:10.238 [720/723] Generating lib/vhost.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:10.238 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:11.619 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.619 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:11.619 05:58:04 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:11.619 05:58:04 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:11.619 05:58:04 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:11.619 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:11.619 [0/1] Installing files. 00:02:11.883 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.883 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.884 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.884 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.885 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.885 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.886 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.886 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:11.886 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.887 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.887 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.888 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.888 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.889 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.889 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.889 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_compressdev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.890 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_power.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.465 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:12.466 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:12.466 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:12.466 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.466 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:12.466 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 
Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:12.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:12.470 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:12.470 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:12.470 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:12.470 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:12.470 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:12.470 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:12.470 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:12.470 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:12.470 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:12.470 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:12.470 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:12.470 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:12.470 Installing symlink pointing 
to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:12.470 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:12.470 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:12.470 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:12.470 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:12.470 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:12.470 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:12.470 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:12.470 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:12.470 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:12.470 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:12.470 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:12.470 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:12.470 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:12.470 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:12.470 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:12.470 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:12.470 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:12.470 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:12.470 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:12.470 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:12.470 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:12.470 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:12.470 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:12.470 Installing symlink pointing to librte_bbdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:12.470 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:12.470 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:12.470 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:12.470 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:12.470 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:12.470 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:12.470 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:12.470 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:12.470 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:12.470 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:12.470 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:12.470 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:12.470 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:12.470 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:12.471 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:12.471 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:12.471 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:12.471 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:12.471 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:12.471 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:12.471 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:12.471 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:12.471 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:12.471 Installing 
symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:12.471 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:12.471 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:12.471 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:12.471 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:12.471 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:12.471 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:12.471 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:12.471 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:12.471 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:12.471 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:12.471 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:12.471 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:12.471 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:12.471 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:12.471 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:12.471 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:12.471 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:12.471 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:12.471 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:12.471 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:12.471 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:12.471 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:12.471 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:12.471 Installing symlink pointing to librte_rib.so.24.2 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:12.471 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:12.471 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:12.471 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:12.471 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:12.471 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:12.471 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:12.471 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:12.471 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:12.471 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:12.471 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:12.471 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:12.471 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:12.471 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:12.471 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:12.471 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:12.471 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:12.471 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:12.471 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:12.471 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:12.471 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:12.471 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:12.471 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:12.471 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:12.471 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 
00:02:12.471 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:12.471 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:12.471 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:12.471 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:12.471 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:12.471 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:12.471 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:12.471 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:12.471 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:12.471 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:12.471 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:12.471 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:12.471 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:12.471 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:12.471 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:12.471 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:12.471 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:12.471 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:12.471 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:12.471 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:12.471 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:12.471 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:12.471 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:12.471 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:12.471 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:12.471 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:12.471 05:58:05 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:12.471 05:58:05 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.471 00:02:12.471 real 0m40.081s 00:02:12.471 user 13m55.935s 00:02:12.471 sys 2m0.737s 00:02:12.471 05:58:05 
build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:12.471 05:58:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:12.471 ************************************ 00:02:12.471 END TEST build_native_dpdk 00:02:12.471 ************************************ 00:02:12.471 05:58:05 -- common/autotest_common.sh@1142 -- $ return 0 00:02:12.471 05:58:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:12.471 05:58:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:12.471 05:58:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:12.471 05:58:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:12.471 05:58:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:12.471 05:58:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:12.471 05:58:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:12.472 05:58:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:12.472 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:12.732 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:12.732 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:12.732 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:12.991 Using 'verbs' RDMA provider 00:02:23.545 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:33.530 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:33.530 Creating mk/config.mk...done. 00:02:33.530 Creating mk/cc.flags.mk...done. 00:02:33.530 Type 'make' to build. 00:02:33.530 05:58:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:33.530 05:58:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:33.530 05:58:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:33.530 05:58:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.530 ************************************ 00:02:33.530 START TEST make 00:02:33.530 ************************************ 00:02:33.530 05:58:25 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:33.530 make[1]: Nothing to be done for 'all'. 
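The configure step above is where the DPDK tree staged in the preceding install lines gets wired into SPDK: --with-dpdk points at dpdk/build, and the "Using .../pkgconfig for additional libs" line is configure resolving compile and link flags from the generated libdpdk.pc; the symlink entries further up lay out the usual librte_*.so -> .so.24 -> .so.24.2 chain plus the dpdk/pmds-24.2 plugin directory, so build/lib behaves like a regular DPDK install. A minimal sketch of the same lookup done by hand follows; the PKG_CONFIG_PATH probe and the trimmed flag list are illustrative only and are not commands taken from this job.

    # Ask pkg-config for the flags recorded in the staged libdpdk.pc, which is
    # roughly what SPDK's configure does when --with-dpdk is given.
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig" pkg-config --cflags --libs libdpdk

    # Re-run only the DPDK-relevant part of the configure line shown in the log
    # (the full invocation also enables rdma, idxd, fio, ublk, vfio-user, ubsan, coverage).
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --with-dpdk="$DPDK_BUILD" --with-shared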
00:02:33.794 The Meson build system 00:02:33.794 Version: 1.3.1 00:02:33.794 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:33.794 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:33.794 Build type: native build 00:02:33.794 Project name: libvfio-user 00:02:33.794 Project version: 0.0.1 00:02:33.794 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:33.794 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:33.794 Host machine cpu family: x86_64 00:02:33.794 Host machine cpu: x86_64 00:02:33.794 Run-time dependency threads found: YES 00:02:33.794 Library dl found: YES 00:02:33.794 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:33.794 Run-time dependency json-c found: YES 0.17 00:02:33.794 Run-time dependency cmocka found: YES 1.1.7 00:02:33.794 Program pytest-3 found: NO 00:02:33.794 Program flake8 found: NO 00:02:33.794 Program misspell-fixer found: NO 00:02:33.794 Program restructuredtext-lint found: NO 00:02:33.794 Program valgrind found: YES (/usr/bin/valgrind) 00:02:33.794 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:33.794 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:33.794 Compiler for C supports arguments -Wwrite-strings: YES 00:02:33.794 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:33.794 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:33.794 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:33.794 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:33.794 Build targets in project: 8 00:02:33.794 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:33.794 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:33.794 00:02:33.794 libvfio-user 0.0.1 00:02:33.794 00:02:33.794 User defined options 00:02:33.794 buildtype : debug 00:02:33.794 default_library: shared 00:02:33.794 libdir : /usr/local/lib 00:02:33.794 00:02:33.794 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.748 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:34.748 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:34.748 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:34.748 [3/37] Compiling C object samples/null.p/null.c.o 00:02:34.748 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:34.748 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:34.748 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:34.748 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:34.748 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:34.748 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:35.008 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:35.008 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:35.008 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:35.008 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:35.008 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:35.008 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:35.008 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:35.008 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:35.008 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:35.008 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:35.008 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:35.008 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:35.008 [22/37] Compiling C object samples/server.p/server.c.o 00:02:35.008 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:35.008 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:35.008 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:35.008 [26/37] Compiling C object samples/client.p/client.c.o 00:02:35.008 [27/37] Linking target samples/client 00:02:35.008 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:35.270 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:35.270 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:35.270 [31/37] Linking target test/unit_tests 00:02:35.270 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:35.532 [33/37] Linking target samples/server 00:02:35.532 [34/37] Linking target samples/null 00:02:35.532 [35/37] Linking target samples/gpio-pci-idio-16 00:02:35.532 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:35.532 [37/37] Linking target samples/lspci 00:02:35.532 INFO: autodetecting backend as ninja 00:02:35.532 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:35.532 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:36.104 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:36.104 ninja: no work to do. 00:02:51.002 CC lib/ut/ut.o 00:02:51.002 CC lib/ut_mock/mock.o 00:02:51.002 CC lib/log/log.o 00:02:51.002 CC lib/log/log_flags.o 00:02:51.002 CC lib/log/log_deprecated.o 00:02:51.002 LIB libspdk_log.a 00:02:51.002 LIB libspdk_ut.a 00:02:51.002 LIB libspdk_ut_mock.a 00:02:51.002 SO libspdk_ut.so.2.0 00:02:51.002 SO libspdk_ut_mock.so.6.0 00:02:51.002 SO libspdk_log.so.7.0 00:02:51.002 SYMLINK libspdk_ut_mock.so 00:02:51.002 SYMLINK libspdk_ut.so 00:02:51.002 SYMLINK libspdk_log.so 00:02:51.002 CC lib/ioat/ioat.o 00:02:51.002 CC lib/dma/dma.o 00:02:51.002 CXX lib/trace_parser/trace.o 00:02:51.002 CC lib/util/base64.o 00:02:51.002 CC lib/util/bit_array.o 00:02:51.002 CC lib/util/cpuset.o 00:02:51.002 CC lib/util/crc16.o 00:02:51.002 CC lib/util/crc32.o 00:02:51.002 CC lib/util/crc32c.o 00:02:51.002 CC lib/util/crc32_ieee.o 00:02:51.002 CC lib/util/crc64.o 00:02:51.002 CC lib/util/dif.o 00:02:51.002 CC lib/util/fd.o 00:02:51.002 CC lib/util/fd_group.o 00:02:51.002 CC lib/util/file.o 00:02:51.002 CC lib/util/hexlify.o 00:02:51.002 CC lib/util/iov.o 00:02:51.002 CC lib/util/math.o 00:02:51.002 CC lib/util/net.o 00:02:51.002 CC lib/util/pipe.o 00:02:51.002 CC lib/util/strerror_tls.o 00:02:51.002 CC lib/util/string.o 00:02:51.002 CC lib/util/uuid.o 00:02:51.002 CC lib/util/zipf.o 00:02:51.002 CC lib/util/xor.o 00:02:51.002 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.002 CC lib/vfio_user/host/vfio_user.o 00:02:51.002 LIB libspdk_dma.a 00:02:51.002 SO libspdk_dma.so.4.0 00:02:51.002 SYMLINK libspdk_dma.so 00:02:51.002 LIB libspdk_ioat.a 00:02:51.002 SO libspdk_ioat.so.7.0 00:02:51.002 LIB libspdk_vfio_user.a 00:02:51.002 SO libspdk_vfio_user.so.5.0 00:02:51.002 SYMLINK libspdk_ioat.so 00:02:51.002 SYMLINK libspdk_vfio_user.so 00:02:51.002 LIB libspdk_util.a 00:02:51.002 SO libspdk_util.so.10.0 00:02:51.002 SYMLINK libspdk_util.so 00:02:51.002 LIB libspdk_trace_parser.a 00:02:51.002 CC lib/json/json_parse.o 00:02:51.002 CC lib/json/json_util.o 00:02:51.002 CC lib/rdma_utils/rdma_utils.o 00:02:51.002 CC lib/json/json_write.o 00:02:51.002 CC lib/conf/conf.o 00:02:51.002 CC lib/idxd/idxd.o 00:02:51.002 CC lib/rdma_provider/common.o 00:02:51.002 CC lib/idxd/idxd_user.o 00:02:51.002 CC lib/env_dpdk/env.o 00:02:51.002 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.002 CC lib/idxd/idxd_kernel.o 00:02:51.002 CC lib/env_dpdk/memory.o 00:02:51.002 CC lib/env_dpdk/pci.o 00:02:51.002 CC lib/vmd/vmd.o 00:02:51.002 CC lib/env_dpdk/init.o 00:02:51.002 CC lib/vmd/led.o 00:02:51.002 CC lib/env_dpdk/threads.o 00:02:51.002 CC lib/env_dpdk/pci_ioat.o 00:02:51.002 CC lib/env_dpdk/pci_virtio.o 00:02:51.002 CC lib/env_dpdk/pci_vmd.o 00:02:51.002 CC lib/env_dpdk/pci_idxd.o 00:02:51.002 CC lib/env_dpdk/pci_event.o 00:02:51.002 CC lib/env_dpdk/sigbus_handler.o 00:02:51.002 CC lib/env_dpdk/pci_dpdk.o 00:02:51.002 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.002 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.002 SO libspdk_trace_parser.so.5.0 00:02:51.002 SYMLINK libspdk_trace_parser.so 00:02:51.002 LIB libspdk_rdma_provider.a 00:02:51.002 SO libspdk_rdma_provider.so.6.0 00:02:51.002 SYMLINK libspdk_rdma_provider.so 00:02:51.002 LIB libspdk_conf.a 00:02:51.002 SO libspdk_conf.so.6.0 
00:02:51.002 LIB libspdk_rdma_utils.a 00:02:51.002 SO libspdk_rdma_utils.so.1.0 00:02:51.002 SYMLINK libspdk_conf.so 00:02:51.002 LIB libspdk_json.a 00:02:51.002 SYMLINK libspdk_rdma_utils.so 00:02:51.002 SO libspdk_json.so.6.0 00:02:51.002 SYMLINK libspdk_json.so 00:02:51.002 LIB libspdk_idxd.a 00:02:51.002 SO libspdk_idxd.so.12.0 00:02:51.002 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.002 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.002 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.002 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:51.002 LIB libspdk_vmd.a 00:02:51.002 SYMLINK libspdk_idxd.so 00:02:51.002 SO libspdk_vmd.so.6.0 00:02:51.002 SYMLINK libspdk_vmd.so 00:02:51.002 LIB libspdk_jsonrpc.a 00:02:51.002 SO libspdk_jsonrpc.so.6.0 00:02:51.002 SYMLINK libspdk_jsonrpc.so 00:02:51.260 CC lib/rpc/rpc.o 00:02:51.519 LIB libspdk_rpc.a 00:02:51.519 LIB libspdk_env_dpdk.a 00:02:51.519 SO libspdk_rpc.so.6.0 00:02:51.519 SO libspdk_env_dpdk.so.15.0 00:02:51.519 SYMLINK libspdk_rpc.so 00:02:51.778 SYMLINK libspdk_env_dpdk.so 00:02:51.778 CC lib/notify/notify.o 00:02:51.778 CC lib/keyring/keyring.o 00:02:51.778 CC lib/notify/notify_rpc.o 00:02:51.778 CC lib/keyring/keyring_rpc.o 00:02:51.778 CC lib/trace/trace.o 00:02:51.778 CC lib/trace/trace_flags.o 00:02:51.778 CC lib/trace/trace_rpc.o 00:02:51.778 LIB libspdk_notify.a 00:02:51.778 SO libspdk_notify.so.6.0 00:02:52.036 SYMLINK libspdk_notify.so 00:02:52.036 LIB libspdk_keyring.a 00:02:52.036 LIB libspdk_trace.a 00:02:52.036 SO libspdk_keyring.so.1.0 00:02:52.036 SO libspdk_trace.so.10.0 00:02:52.036 SYMLINK libspdk_keyring.so 00:02:52.036 SYMLINK libspdk_trace.so 00:02:52.295 CC lib/thread/thread.o 00:02:52.295 CC lib/thread/iobuf.o 00:02:52.295 CC lib/sock/sock.o 00:02:52.295 CC lib/sock/sock_rpc.o 00:02:52.553 LIB libspdk_sock.a 00:02:52.553 SO libspdk_sock.so.10.0 00:02:52.553 SYMLINK libspdk_sock.so 00:02:52.812 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.812 CC lib/nvme/nvme_ctrlr.o 00:02:52.812 CC lib/nvme/nvme_fabric.o 00:02:52.812 CC lib/nvme/nvme_ns_cmd.o 00:02:52.812 CC lib/nvme/nvme_ns.o 00:02:52.812 CC lib/nvme/nvme_pcie_common.o 00:02:52.812 CC lib/nvme/nvme_pcie.o 00:02:52.812 CC lib/nvme/nvme_qpair.o 00:02:52.812 CC lib/nvme/nvme.o 00:02:52.812 CC lib/nvme/nvme_quirks.o 00:02:52.812 CC lib/nvme/nvme_transport.o 00:02:52.812 CC lib/nvme/nvme_discovery.o 00:02:52.812 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.812 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.812 CC lib/nvme/nvme_tcp.o 00:02:52.812 CC lib/nvme/nvme_opal.o 00:02:52.812 CC lib/nvme/nvme_io_msg.o 00:02:52.812 CC lib/nvme/nvme_poll_group.o 00:02:52.812 CC lib/nvme/nvme_zns.o 00:02:52.812 CC lib/nvme/nvme_stubs.o 00:02:52.812 CC lib/nvme/nvme_auth.o 00:02:52.812 CC lib/nvme/nvme_cuse.o 00:02:52.812 CC lib/nvme/nvme_rdma.o 00:02:52.812 CC lib/nvme/nvme_vfio_user.o 00:02:53.748 LIB libspdk_thread.a 00:02:53.748 SO libspdk_thread.so.10.1 00:02:53.748 SYMLINK libspdk_thread.so 00:02:54.006 CC lib/accel/accel.o 00:02:54.006 CC lib/vfu_tgt/tgt_endpoint.o 00:02:54.006 CC lib/init/json_config.o 00:02:54.006 CC lib/blob/blobstore.o 00:02:54.006 CC lib/virtio/virtio.o 00:02:54.006 CC lib/init/subsystem.o 00:02:54.006 CC lib/vfu_tgt/tgt_rpc.o 00:02:54.006 CC lib/accel/accel_rpc.o 00:02:54.006 CC lib/blob/request.o 00:02:54.006 CC lib/accel/accel_sw.o 00:02:54.006 CC lib/virtio/virtio_vhost_user.o 00:02:54.006 CC lib/init/subsystem_rpc.o 00:02:54.006 CC lib/blob/zeroes.o 00:02:54.006 CC lib/virtio/virtio_vfio_user.o 00:02:54.006 CC lib/init/rpc.o 00:02:54.006 CC lib/blob/blob_bs_dev.o 00:02:54.006 CC 
lib/virtio/virtio_pci.o 00:02:54.265 LIB libspdk_init.a 00:02:54.265 SO libspdk_init.so.5.0 00:02:54.265 LIB libspdk_virtio.a 00:02:54.265 LIB libspdk_vfu_tgt.a 00:02:54.265 SYMLINK libspdk_init.so 00:02:54.265 SO libspdk_virtio.so.7.0 00:02:54.523 SO libspdk_vfu_tgt.so.3.0 00:02:54.523 SYMLINK libspdk_vfu_tgt.so 00:02:54.523 SYMLINK libspdk_virtio.so 00:02:54.523 CC lib/event/app.o 00:02:54.523 CC lib/event/reactor.o 00:02:54.523 CC lib/event/log_rpc.o 00:02:54.523 CC lib/event/app_rpc.o 00:02:54.523 CC lib/event/scheduler_static.o 00:02:55.090 LIB libspdk_event.a 00:02:55.090 SO libspdk_event.so.14.0 00:02:55.090 LIB libspdk_accel.a 00:02:55.090 SYMLINK libspdk_event.so 00:02:55.090 SO libspdk_accel.so.16.0 00:02:55.090 SYMLINK libspdk_accel.so 00:02:55.348 LIB libspdk_nvme.a 00:02:55.348 CC lib/bdev/bdev.o 00:02:55.348 CC lib/bdev/bdev_rpc.o 00:02:55.348 CC lib/bdev/bdev_zone.o 00:02:55.348 CC lib/bdev/part.o 00:02:55.348 CC lib/bdev/scsi_nvme.o 00:02:55.348 SO libspdk_nvme.so.13.1 00:02:55.607 SYMLINK libspdk_nvme.so 00:02:56.984 LIB libspdk_blob.a 00:02:56.984 SO libspdk_blob.so.11.0 00:02:57.241 SYMLINK libspdk_blob.so 00:02:57.241 CC lib/lvol/lvol.o 00:02:57.241 CC lib/blobfs/blobfs.o 00:02:57.241 CC lib/blobfs/tree.o 00:02:57.808 LIB libspdk_bdev.a 00:02:57.808 SO libspdk_bdev.so.16.0 00:02:58.066 SYMLINK libspdk_bdev.so 00:02:58.066 LIB libspdk_blobfs.a 00:02:58.066 SO libspdk_blobfs.so.10.0 00:02:58.066 CC lib/scsi/dev.o 00:02:58.066 CC lib/ublk/ublk.o 00:02:58.066 CC lib/nbd/nbd.o 00:02:58.066 CC lib/ublk/ublk_rpc.o 00:02:58.066 CC lib/nbd/nbd_rpc.o 00:02:58.066 CC lib/nvmf/ctrlr.o 00:02:58.066 CC lib/scsi/lun.o 00:02:58.066 CC lib/ftl/ftl_core.o 00:02:58.066 CC lib/scsi/port.o 00:02:58.066 CC lib/nvmf/ctrlr_discovery.o 00:02:58.066 CC lib/ftl/ftl_init.o 00:02:58.066 CC lib/scsi/scsi.o 00:02:58.066 CC lib/ftl/ftl_layout.o 00:02:58.066 CC lib/scsi/scsi_bdev.o 00:02:58.066 CC lib/nvmf/ctrlr_bdev.o 00:02:58.066 CC lib/ftl/ftl_debug.o 00:02:58.066 CC lib/nvmf/subsystem.o 00:02:58.066 CC lib/scsi/scsi_pr.o 00:02:58.066 CC lib/ftl/ftl_io.o 00:02:58.066 CC lib/nvmf/nvmf.o 00:02:58.066 CC lib/ftl/ftl_sb.o 00:02:58.066 CC lib/scsi/task.o 00:02:58.066 CC lib/scsi/scsi_rpc.o 00:02:58.066 CC lib/nvmf/nvmf_rpc.o 00:02:58.066 CC lib/ftl/ftl_l2p.o 00:02:58.066 CC lib/nvmf/transport.o 00:02:58.066 CC lib/nvmf/tcp.o 00:02:58.066 CC lib/ftl/ftl_l2p_flat.o 00:02:58.066 CC lib/ftl/ftl_nv_cache.o 00:02:58.066 CC lib/nvmf/stubs.o 00:02:58.066 CC lib/ftl/ftl_band.o 00:02:58.066 CC lib/nvmf/mdns_server.o 00:02:58.066 CC lib/ftl/ftl_band_ops.o 00:02:58.066 CC lib/nvmf/vfio_user.o 00:02:58.066 CC lib/nvmf/rdma.o 00:02:58.066 CC lib/ftl/ftl_writer.o 00:02:58.066 CC lib/nvmf/auth.o 00:02:58.066 CC lib/ftl/ftl_rq.o 00:02:58.333 CC lib/ftl/ftl_reloc.o 00:02:58.333 CC lib/ftl/ftl_l2p_cache.o 00:02:58.333 CC lib/ftl/ftl_p2l.o 00:02:58.333 CC lib/ftl/mngt/ftl_mngt.o 00:02:58.333 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:58.333 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:58.333 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.333 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.333 LIB libspdk_lvol.a 00:02:58.333 SYMLINK libspdk_blobfs.so 00:02:58.333 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.333 SO libspdk_lvol.so.10.0 00:02:58.333 SYMLINK libspdk_lvol.so 00:02:58.333 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.594 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.594 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.594 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:58.594 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:58.594 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:58.594 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:58.594 CC lib/ftl/utils/ftl_conf.o 00:02:58.594 CC lib/ftl/utils/ftl_md.o 00:02:58.594 CC lib/ftl/utils/ftl_mempool.o 00:02:58.594 CC lib/ftl/utils/ftl_bitmap.o 00:02:58.594 CC lib/ftl/utils/ftl_property.o 00:02:58.594 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:58.594 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:58.594 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:58.594 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:58.854 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:58.854 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:58.854 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:58.854 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:58.854 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:58.854 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:58.854 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:58.854 CC lib/ftl/base/ftl_base_dev.o 00:02:58.854 CC lib/ftl/base/ftl_base_bdev.o 00:02:58.854 CC lib/ftl/ftl_trace.o 00:02:59.112 LIB libspdk_nbd.a 00:02:59.112 SO libspdk_nbd.so.7.0 00:02:59.112 SYMLINK libspdk_nbd.so 00:02:59.112 LIB libspdk_scsi.a 00:02:59.112 SO libspdk_scsi.so.9.0 00:02:59.371 LIB libspdk_ublk.a 00:02:59.371 SO libspdk_ublk.so.3.0 00:02:59.371 SYMLINK libspdk_scsi.so 00:02:59.371 SYMLINK libspdk_ublk.so 00:02:59.371 CC lib/vhost/vhost.o 00:02:59.371 CC lib/iscsi/conn.o 00:02:59.371 CC lib/vhost/vhost_rpc.o 00:02:59.371 CC lib/iscsi/init_grp.o 00:02:59.371 CC lib/iscsi/iscsi.o 00:02:59.371 CC lib/vhost/vhost_scsi.o 00:02:59.371 CC lib/vhost/vhost_blk.o 00:02:59.371 CC lib/iscsi/md5.o 00:02:59.371 CC lib/vhost/rte_vhost_user.o 00:02:59.371 CC lib/iscsi/param.o 00:02:59.371 CC lib/iscsi/portal_grp.o 00:02:59.371 CC lib/iscsi/tgt_node.o 00:02:59.371 CC lib/iscsi/iscsi_subsystem.o 00:02:59.371 CC lib/iscsi/iscsi_rpc.o 00:02:59.371 CC lib/iscsi/task.o 00:02:59.630 LIB libspdk_ftl.a 00:02:59.890 SO libspdk_ftl.so.9.0 00:03:00.158 SYMLINK libspdk_ftl.so 00:03:00.724 LIB libspdk_vhost.a 00:03:00.724 SO libspdk_vhost.so.8.0 00:03:00.724 LIB libspdk_nvmf.a 00:03:00.724 SYMLINK libspdk_vhost.so 00:03:00.982 SO libspdk_nvmf.so.19.0 00:03:00.982 LIB libspdk_iscsi.a 00:03:00.982 SO libspdk_iscsi.so.8.0 00:03:00.982 SYMLINK libspdk_nvmf.so 00:03:01.240 SYMLINK libspdk_iscsi.so 00:03:01.499 CC module/env_dpdk/env_dpdk_rpc.o 00:03:01.499 CC module/vfu_device/vfu_virtio.o 00:03:01.499 CC module/vfu_device/vfu_virtio_blk.o 00:03:01.499 CC module/vfu_device/vfu_virtio_scsi.o 00:03:01.499 CC module/vfu_device/vfu_virtio_rpc.o 00:03:01.499 CC module/keyring/file/keyring.o 00:03:01.499 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:01.499 CC module/keyring/file/keyring_rpc.o 00:03:01.499 CC module/accel/error/accel_error.o 00:03:01.499 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:01.499 CC module/accel/error/accel_error_rpc.o 00:03:01.499 CC module/accel/iaa/accel_iaa.o 00:03:01.499 CC module/accel/iaa/accel_iaa_rpc.o 00:03:01.499 CC module/accel/ioat/accel_ioat.o 00:03:01.499 CC module/accel/dsa/accel_dsa.o 00:03:01.499 CC module/accel/dsa/accel_dsa_rpc.o 00:03:01.499 CC module/accel/ioat/accel_ioat_rpc.o 00:03:01.499 CC module/sock/posix/posix.o 00:03:01.499 CC module/blob/bdev/blob_bdev.o 00:03:01.499 CC module/scheduler/gscheduler/gscheduler.o 00:03:01.499 CC module/keyring/linux/keyring.o 00:03:01.499 CC module/keyring/linux/keyring_rpc.o 00:03:01.499 LIB libspdk_env_dpdk_rpc.a 00:03:01.499 SO libspdk_env_dpdk_rpc.so.6.0 00:03:01.758 SYMLINK libspdk_env_dpdk_rpc.so 00:03:01.758 LIB libspdk_keyring_linux.a 00:03:01.758 LIB libspdk_keyring_file.a 00:03:01.758 LIB 
libspdk_scheduler_gscheduler.a 00:03:01.758 LIB libspdk_scheduler_dpdk_governor.a 00:03:01.758 SO libspdk_keyring_linux.so.1.0 00:03:01.758 SO libspdk_keyring_file.so.1.0 00:03:01.758 LIB libspdk_accel_error.a 00:03:01.758 SO libspdk_scheduler_gscheduler.so.4.0 00:03:01.758 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:01.758 LIB libspdk_accel_ioat.a 00:03:01.758 LIB libspdk_scheduler_dynamic.a 00:03:01.758 LIB libspdk_accel_iaa.a 00:03:01.758 SO libspdk_accel_error.so.2.0 00:03:01.758 SO libspdk_scheduler_dynamic.so.4.0 00:03:01.758 SO libspdk_accel_ioat.so.6.0 00:03:01.758 SYMLINK libspdk_keyring_file.so 00:03:01.758 SYMLINK libspdk_keyring_linux.so 00:03:01.758 SYMLINK libspdk_scheduler_gscheduler.so 00:03:01.758 SO libspdk_accel_iaa.so.3.0 00:03:01.758 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:01.758 LIB libspdk_accel_dsa.a 00:03:01.758 SYMLINK libspdk_accel_error.so 00:03:01.758 SYMLINK libspdk_scheduler_dynamic.so 00:03:01.758 SYMLINK libspdk_accel_ioat.so 00:03:01.758 LIB libspdk_blob_bdev.a 00:03:01.758 SO libspdk_accel_dsa.so.5.0 00:03:01.758 SYMLINK libspdk_accel_iaa.so 00:03:01.758 SO libspdk_blob_bdev.so.11.0 00:03:01.758 SYMLINK libspdk_accel_dsa.so 00:03:02.017 SYMLINK libspdk_blob_bdev.so 00:03:02.017 LIB libspdk_vfu_device.a 00:03:02.017 SO libspdk_vfu_device.so.3.0 00:03:02.280 CC module/bdev/error/vbdev_error.o 00:03:02.280 CC module/bdev/malloc/bdev_malloc.o 00:03:02.280 CC module/bdev/passthru/vbdev_passthru.o 00:03:02.280 CC module/bdev/delay/vbdev_delay.o 00:03:02.280 CC module/blobfs/bdev/blobfs_bdev.o 00:03:02.280 CC module/bdev/error/vbdev_error_rpc.o 00:03:02.280 CC module/bdev/nvme/bdev_nvme.o 00:03:02.280 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:02.280 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:02.280 CC module/bdev/null/bdev_null.o 00:03:02.280 CC module/bdev/gpt/gpt.o 00:03:02.280 CC module/bdev/gpt/vbdev_gpt.o 00:03:02.280 CC module/bdev/split/vbdev_split.o 00:03:02.280 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:02.280 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:02.280 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:02.280 CC module/bdev/null/bdev_null_rpc.o 00:03:02.280 CC module/bdev/split/vbdev_split_rpc.o 00:03:02.280 CC module/bdev/iscsi/bdev_iscsi.o 00:03:02.280 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:02.280 CC module/bdev/lvol/vbdev_lvol.o 00:03:02.280 CC module/bdev/ftl/bdev_ftl.o 00:03:02.280 CC module/bdev/aio/bdev_aio.o 00:03:02.280 CC module/bdev/raid/bdev_raid.o 00:03:02.280 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:02.280 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:02.280 CC module/bdev/nvme/nvme_rpc.o 00:03:02.280 CC module/bdev/nvme/bdev_mdns_client.o 00:03:02.280 CC module/bdev/raid/bdev_raid_rpc.o 00:03:02.280 CC module/bdev/aio/bdev_aio_rpc.o 00:03:02.280 CC module/bdev/raid/bdev_raid_sb.o 00:03:02.280 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:02.280 CC module/bdev/nvme/vbdev_opal.o 00:03:02.280 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:02.280 CC module/bdev/raid/raid0.o 00:03:02.280 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:02.280 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:02.280 CC module/bdev/raid/raid1.o 00:03:02.280 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:02.280 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:02.280 CC module/bdev/raid/concat.o 00:03:02.280 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:02.280 SYMLINK libspdk_vfu_device.so 00:03:02.539 LIB libspdk_sock_posix.a 00:03:02.539 SO libspdk_sock_posix.so.6.0 00:03:02.539 LIB libspdk_blobfs_bdev.a 00:03:02.539 LIB 
libspdk_bdev_gpt.a 00:03:02.539 SO libspdk_blobfs_bdev.so.6.0 00:03:02.539 SYMLINK libspdk_sock_posix.so 00:03:02.539 SO libspdk_bdev_gpt.so.6.0 00:03:02.539 LIB libspdk_bdev_error.a 00:03:02.539 SYMLINK libspdk_blobfs_bdev.so 00:03:02.539 LIB libspdk_bdev_split.a 00:03:02.539 LIB libspdk_bdev_delay.a 00:03:02.539 SO libspdk_bdev_error.so.6.0 00:03:02.539 SYMLINK libspdk_bdev_gpt.so 00:03:02.539 LIB libspdk_bdev_ftl.a 00:03:02.539 SO libspdk_bdev_split.so.6.0 00:03:02.539 SO libspdk_bdev_delay.so.6.0 00:03:02.799 SO libspdk_bdev_ftl.so.6.0 00:03:02.799 LIB libspdk_bdev_aio.a 00:03:02.799 LIB libspdk_bdev_null.a 00:03:02.799 SYMLINK libspdk_bdev_error.so 00:03:02.799 SO libspdk_bdev_aio.so.6.0 00:03:02.799 SO libspdk_bdev_null.so.6.0 00:03:02.799 SYMLINK libspdk_bdev_split.so 00:03:02.799 LIB libspdk_bdev_malloc.a 00:03:02.799 LIB libspdk_bdev_passthru.a 00:03:02.799 SYMLINK libspdk_bdev_delay.so 00:03:02.799 LIB libspdk_bdev_iscsi.a 00:03:02.799 LIB libspdk_bdev_zone_block.a 00:03:02.799 SYMLINK libspdk_bdev_ftl.so 00:03:02.799 SO libspdk_bdev_malloc.so.6.0 00:03:02.799 SO libspdk_bdev_passthru.so.6.0 00:03:02.799 SO libspdk_bdev_iscsi.so.6.0 00:03:02.799 SO libspdk_bdev_zone_block.so.6.0 00:03:02.799 SYMLINK libspdk_bdev_aio.so 00:03:02.799 SYMLINK libspdk_bdev_null.so 00:03:02.799 SYMLINK libspdk_bdev_malloc.so 00:03:02.799 SYMLINK libspdk_bdev_passthru.so 00:03:02.799 SYMLINK libspdk_bdev_iscsi.so 00:03:02.799 SYMLINK libspdk_bdev_zone_block.so 00:03:02.799 LIB libspdk_bdev_lvol.a 00:03:03.057 LIB libspdk_bdev_virtio.a 00:03:03.057 SO libspdk_bdev_lvol.so.6.0 00:03:03.057 SO libspdk_bdev_virtio.so.6.0 00:03:03.057 SYMLINK libspdk_bdev_lvol.so 00:03:03.057 SYMLINK libspdk_bdev_virtio.so 00:03:03.316 LIB libspdk_bdev_raid.a 00:03:03.316 SO libspdk_bdev_raid.so.6.0 00:03:03.574 SYMLINK libspdk_bdev_raid.so 00:03:04.508 LIB libspdk_bdev_nvme.a 00:03:04.508 SO libspdk_bdev_nvme.so.7.0 00:03:04.766 SYMLINK libspdk_bdev_nvme.so 00:03:05.024 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:05.024 CC module/event/subsystems/iobuf/iobuf.o 00:03:05.024 CC module/event/subsystems/scheduler/scheduler.o 00:03:05.024 CC module/event/subsystems/sock/sock.o 00:03:05.024 CC module/event/subsystems/keyring/keyring.o 00:03:05.024 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:05.024 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:05.024 CC module/event/subsystems/vmd/vmd.o 00:03:05.024 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:05.283 LIB libspdk_event_keyring.a 00:03:05.283 LIB libspdk_event_vhost_blk.a 00:03:05.283 LIB libspdk_event_scheduler.a 00:03:05.283 LIB libspdk_event_vfu_tgt.a 00:03:05.283 LIB libspdk_event_vmd.a 00:03:05.283 LIB libspdk_event_sock.a 00:03:05.283 SO libspdk_event_keyring.so.1.0 00:03:05.283 SO libspdk_event_vhost_blk.so.3.0 00:03:05.283 LIB libspdk_event_iobuf.a 00:03:05.283 SO libspdk_event_vfu_tgt.so.3.0 00:03:05.283 SO libspdk_event_scheduler.so.4.0 00:03:05.283 SO libspdk_event_sock.so.5.0 00:03:05.283 SO libspdk_event_vmd.so.6.0 00:03:05.283 SO libspdk_event_iobuf.so.3.0 00:03:05.283 SYMLINK libspdk_event_keyring.so 00:03:05.283 SYMLINK libspdk_event_vhost_blk.so 00:03:05.283 SYMLINK libspdk_event_vfu_tgt.so 00:03:05.283 SYMLINK libspdk_event_scheduler.so 00:03:05.283 SYMLINK libspdk_event_sock.so 00:03:05.283 SYMLINK libspdk_event_vmd.so 00:03:05.283 SYMLINK libspdk_event_iobuf.so 00:03:05.541 CC module/event/subsystems/accel/accel.o 00:03:05.541 LIB libspdk_event_accel.a 00:03:05.541 SO libspdk_event_accel.so.6.0 00:03:05.799 SYMLINK libspdk_event_accel.so 
00:03:05.799 CC module/event/subsystems/bdev/bdev.o 00:03:06.058 LIB libspdk_event_bdev.a 00:03:06.058 SO libspdk_event_bdev.so.6.0 00:03:06.058 SYMLINK libspdk_event_bdev.so 00:03:06.316 CC module/event/subsystems/nbd/nbd.o 00:03:06.316 CC module/event/subsystems/scsi/scsi.o 00:03:06.316 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:06.316 CC module/event/subsystems/ublk/ublk.o 00:03:06.316 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:06.316 LIB libspdk_event_nbd.a 00:03:06.316 LIB libspdk_event_ublk.a 00:03:06.574 LIB libspdk_event_scsi.a 00:03:06.574 SO libspdk_event_nbd.so.6.0 00:03:06.574 SO libspdk_event_ublk.so.3.0 00:03:06.574 SO libspdk_event_scsi.so.6.0 00:03:06.574 SYMLINK libspdk_event_nbd.so 00:03:06.574 SYMLINK libspdk_event_ublk.so 00:03:06.574 LIB libspdk_event_nvmf.a 00:03:06.574 SYMLINK libspdk_event_scsi.so 00:03:06.574 SO libspdk_event_nvmf.so.6.0 00:03:06.574 SYMLINK libspdk_event_nvmf.so 00:03:06.574 CC module/event/subsystems/iscsi/iscsi.o 00:03:06.574 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:06.834 LIB libspdk_event_vhost_scsi.a 00:03:06.834 LIB libspdk_event_iscsi.a 00:03:06.834 SO libspdk_event_vhost_scsi.so.3.0 00:03:06.834 SO libspdk_event_iscsi.so.6.0 00:03:06.834 SYMLINK libspdk_event_vhost_scsi.so 00:03:06.834 SYMLINK libspdk_event_iscsi.so 00:03:07.093 SO libspdk.so.6.0 00:03:07.093 SYMLINK libspdk.so 00:03:07.093 CC app/trace_record/trace_record.o 00:03:07.353 CC app/spdk_top/spdk_top.o 00:03:07.353 TEST_HEADER include/spdk/accel.h 00:03:07.353 CC app/spdk_lspci/spdk_lspci.o 00:03:07.353 TEST_HEADER include/spdk/accel_module.h 00:03:07.353 TEST_HEADER include/spdk/assert.h 00:03:07.353 CC app/spdk_nvme_discover/discovery_aer.o 00:03:07.353 CC app/spdk_nvme_identify/identify.o 00:03:07.353 TEST_HEADER include/spdk/base64.h 00:03:07.353 TEST_HEADER include/spdk/barrier.h 00:03:07.353 TEST_HEADER include/spdk/bdev.h 00:03:07.353 TEST_HEADER include/spdk/bdev_module.h 00:03:07.353 TEST_HEADER include/spdk/bdev_zone.h 00:03:07.353 CC test/rpc_client/rpc_client_test.o 00:03:07.353 CC app/spdk_nvme_perf/perf.o 00:03:07.353 TEST_HEADER include/spdk/bit_array.h 00:03:07.353 TEST_HEADER include/spdk/bit_pool.h 00:03:07.353 TEST_HEADER include/spdk/blob_bdev.h 00:03:07.353 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:07.353 CXX app/trace/trace.o 00:03:07.353 TEST_HEADER include/spdk/blobfs.h 00:03:07.353 TEST_HEADER include/spdk/blob.h 00:03:07.353 TEST_HEADER include/spdk/conf.h 00:03:07.353 TEST_HEADER include/spdk/config.h 00:03:07.353 TEST_HEADER include/spdk/cpuset.h 00:03:07.353 TEST_HEADER include/spdk/crc16.h 00:03:07.353 TEST_HEADER include/spdk/crc32.h 00:03:07.353 TEST_HEADER include/spdk/crc64.h 00:03:07.353 TEST_HEADER include/spdk/dif.h 00:03:07.353 TEST_HEADER include/spdk/dma.h 00:03:07.353 TEST_HEADER include/spdk/endian.h 00:03:07.353 TEST_HEADER include/spdk/env_dpdk.h 00:03:07.353 TEST_HEADER include/spdk/env.h 00:03:07.353 TEST_HEADER include/spdk/event.h 00:03:07.353 TEST_HEADER include/spdk/fd_group.h 00:03:07.353 TEST_HEADER include/spdk/fd.h 00:03:07.353 TEST_HEADER include/spdk/file.h 00:03:07.353 TEST_HEADER include/spdk/ftl.h 00:03:07.353 TEST_HEADER include/spdk/gpt_spec.h 00:03:07.353 TEST_HEADER include/spdk/hexlify.h 00:03:07.353 TEST_HEADER include/spdk/histogram_data.h 00:03:07.353 TEST_HEADER include/spdk/idxd.h 00:03:07.353 TEST_HEADER include/spdk/idxd_spec.h 00:03:07.353 TEST_HEADER include/spdk/init.h 00:03:07.353 TEST_HEADER include/spdk/ioat.h 00:03:07.353 TEST_HEADER include/spdk/ioat_spec.h 00:03:07.353 
TEST_HEADER include/spdk/iscsi_spec.h 00:03:07.353 TEST_HEADER include/spdk/json.h 00:03:07.353 TEST_HEADER include/spdk/jsonrpc.h 00:03:07.353 TEST_HEADER include/spdk/keyring.h 00:03:07.353 TEST_HEADER include/spdk/keyring_module.h 00:03:07.353 TEST_HEADER include/spdk/likely.h 00:03:07.353 TEST_HEADER include/spdk/log.h 00:03:07.353 TEST_HEADER include/spdk/lvol.h 00:03:07.353 TEST_HEADER include/spdk/memory.h 00:03:07.353 TEST_HEADER include/spdk/mmio.h 00:03:07.353 TEST_HEADER include/spdk/nbd.h 00:03:07.353 TEST_HEADER include/spdk/net.h 00:03:07.353 TEST_HEADER include/spdk/notify.h 00:03:07.353 TEST_HEADER include/spdk/nvme.h 00:03:07.353 TEST_HEADER include/spdk/nvme_intel.h 00:03:07.353 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:07.353 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:07.353 TEST_HEADER include/spdk/nvme_spec.h 00:03:07.353 TEST_HEADER include/spdk/nvme_zns.h 00:03:07.353 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:07.353 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:07.353 TEST_HEADER include/spdk/nvmf.h 00:03:07.353 TEST_HEADER include/spdk/nvmf_spec.h 00:03:07.353 TEST_HEADER include/spdk/nvmf_transport.h 00:03:07.353 TEST_HEADER include/spdk/opal.h 00:03:07.353 TEST_HEADER include/spdk/opal_spec.h 00:03:07.353 TEST_HEADER include/spdk/pci_ids.h 00:03:07.353 TEST_HEADER include/spdk/pipe.h 00:03:07.353 TEST_HEADER include/spdk/queue.h 00:03:07.353 TEST_HEADER include/spdk/reduce.h 00:03:07.353 TEST_HEADER include/spdk/rpc.h 00:03:07.353 TEST_HEADER include/spdk/scheduler.h 00:03:07.353 TEST_HEADER include/spdk/scsi.h 00:03:07.353 TEST_HEADER include/spdk/scsi_spec.h 00:03:07.353 TEST_HEADER include/spdk/stdinc.h 00:03:07.353 TEST_HEADER include/spdk/sock.h 00:03:07.353 TEST_HEADER include/spdk/string.h 00:03:07.353 TEST_HEADER include/spdk/trace.h 00:03:07.353 TEST_HEADER include/spdk/thread.h 00:03:07.353 TEST_HEADER include/spdk/tree.h 00:03:07.353 TEST_HEADER include/spdk/trace_parser.h 00:03:07.353 TEST_HEADER include/spdk/ublk.h 00:03:07.353 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:07.353 TEST_HEADER include/spdk/util.h 00:03:07.353 TEST_HEADER include/spdk/version.h 00:03:07.353 TEST_HEADER include/spdk/uuid.h 00:03:07.353 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:07.353 TEST_HEADER include/spdk/vhost.h 00:03:07.353 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:07.353 TEST_HEADER include/spdk/vmd.h 00:03:07.353 TEST_HEADER include/spdk/xor.h 00:03:07.353 TEST_HEADER include/spdk/zipf.h 00:03:07.353 CXX test/cpp_headers/accel.o 00:03:07.353 CXX test/cpp_headers/accel_module.o 00:03:07.353 CXX test/cpp_headers/assert.o 00:03:07.353 CXX test/cpp_headers/barrier.o 00:03:07.353 CXX test/cpp_headers/base64.o 00:03:07.353 CXX test/cpp_headers/bdev.o 00:03:07.353 CXX test/cpp_headers/bdev_module.o 00:03:07.353 CXX test/cpp_headers/bdev_zone.o 00:03:07.353 CXX test/cpp_headers/bit_array.o 00:03:07.353 CXX test/cpp_headers/bit_pool.o 00:03:07.353 CC app/spdk_dd/spdk_dd.o 00:03:07.353 CXX test/cpp_headers/blob_bdev.o 00:03:07.353 CXX test/cpp_headers/blobfs_bdev.o 00:03:07.353 CXX test/cpp_headers/blobfs.o 00:03:07.353 CXX test/cpp_headers/blob.o 00:03:07.353 CXX test/cpp_headers/conf.o 00:03:07.353 CXX test/cpp_headers/config.o 00:03:07.353 CXX test/cpp_headers/cpuset.o 00:03:07.353 CXX test/cpp_headers/crc16.o 00:03:07.353 CC app/iscsi_tgt/iscsi_tgt.o 00:03:07.353 CC app/nvmf_tgt/nvmf_main.o 00:03:07.354 CXX test/cpp_headers/crc32.o 00:03:07.354 CC examples/ioat/verify/verify.o 00:03:07.354 CC examples/ioat/perf/perf.o 00:03:07.354 CC 
examples/util/zipf/zipf.o 00:03:07.354 CC test/env/memory/memory_ut.o 00:03:07.354 CC test/app/jsoncat/jsoncat.o 00:03:07.354 CC test/env/vtophys/vtophys.o 00:03:07.354 CC test/app/histogram_perf/histogram_perf.o 00:03:07.354 CC test/env/pci/pci_ut.o 00:03:07.354 CC test/app/stub/stub.o 00:03:07.354 CC app/fio/nvme/fio_plugin.o 00:03:07.354 CC app/spdk_tgt/spdk_tgt.o 00:03:07.354 CC test/thread/poller_perf/poller_perf.o 00:03:07.354 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:07.354 CC test/dma/test_dma/test_dma.o 00:03:07.354 CC app/fio/bdev/fio_plugin.o 00:03:07.354 CC test/app/bdev_svc/bdev_svc.o 00:03:07.622 CC test/env/mem_callbacks/mem_callbacks.o 00:03:07.622 LINK spdk_lspci 00:03:07.622 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.622 LINK rpc_client_test 00:03:07.622 LINK spdk_nvme_discover 00:03:07.622 LINK jsoncat 00:03:07.622 LINK interrupt_tgt 00:03:07.622 LINK zipf 00:03:07.622 LINK poller_perf 00:03:07.622 LINK vtophys 00:03:07.622 LINK histogram_perf 00:03:07.622 CXX test/cpp_headers/crc64.o 00:03:07.622 CXX test/cpp_headers/dif.o 00:03:07.622 CXX test/cpp_headers/dma.o 00:03:07.622 CXX test/cpp_headers/endian.o 00:03:07.622 CXX test/cpp_headers/env_dpdk.o 00:03:07.622 CXX test/cpp_headers/env.o 00:03:07.622 CXX test/cpp_headers/event.o 00:03:07.885 LINK nvmf_tgt 00:03:07.885 LINK spdk_trace_record 00:03:07.885 CXX test/cpp_headers/fd_group.o 00:03:07.885 CXX test/cpp_headers/fd.o 00:03:07.885 LINK env_dpdk_post_init 00:03:07.885 CXX test/cpp_headers/file.o 00:03:07.885 LINK iscsi_tgt 00:03:07.885 CXX test/cpp_headers/ftl.o 00:03:07.885 LINK stub 00:03:07.885 CXX test/cpp_headers/gpt_spec.o 00:03:07.885 CXX test/cpp_headers/hexlify.o 00:03:07.885 LINK verify 00:03:07.885 LINK ioat_perf 00:03:07.885 CXX test/cpp_headers/histogram_data.o 00:03:07.886 CXX test/cpp_headers/idxd.o 00:03:07.886 CXX test/cpp_headers/idxd_spec.o 00:03:07.886 LINK bdev_svc 00:03:07.886 CXX test/cpp_headers/init.o 00:03:07.886 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:07.886 LINK spdk_tgt 00:03:07.886 CXX test/cpp_headers/ioat.o 00:03:07.886 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:07.886 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:07.886 CXX test/cpp_headers/ioat_spec.o 00:03:08.147 CXX test/cpp_headers/iscsi_spec.o 00:03:08.147 LINK spdk_dd 00:03:08.147 CXX test/cpp_headers/json.o 00:03:08.147 CXX test/cpp_headers/jsonrpc.o 00:03:08.147 CXX test/cpp_headers/keyring.o 00:03:08.147 CXX test/cpp_headers/keyring_module.o 00:03:08.147 CXX test/cpp_headers/likely.o 00:03:08.147 CXX test/cpp_headers/log.o 00:03:08.147 CXX test/cpp_headers/lvol.o 00:03:08.147 CXX test/cpp_headers/memory.o 00:03:08.147 CXX test/cpp_headers/mmio.o 00:03:08.147 CXX test/cpp_headers/nbd.o 00:03:08.147 LINK spdk_trace 00:03:08.147 CXX test/cpp_headers/net.o 00:03:08.147 CXX test/cpp_headers/notify.o 00:03:08.147 LINK pci_ut 00:03:08.147 CXX test/cpp_headers/nvme.o 00:03:08.147 CXX test/cpp_headers/nvme_intel.o 00:03:08.147 CXX test/cpp_headers/nvme_ocssd.o 00:03:08.147 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:08.147 CXX test/cpp_headers/nvme_spec.o 00:03:08.147 CXX test/cpp_headers/nvme_zns.o 00:03:08.147 CXX test/cpp_headers/nvmf_cmd.o 00:03:08.147 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:08.147 LINK test_dma 00:03:08.147 CXX test/cpp_headers/nvmf.o 00:03:08.147 CXX test/cpp_headers/nvmf_spec.o 00:03:08.147 CXX test/cpp_headers/nvmf_transport.o 00:03:08.147 CXX test/cpp_headers/opal.o 00:03:08.147 CXX test/cpp_headers/opal_spec.o 00:03:08.408 CXX test/cpp_headers/pci_ids.o 00:03:08.408 
CC examples/sock/hello_world/hello_sock.o 00:03:08.408 CXX test/cpp_headers/pipe.o 00:03:08.408 LINK nvme_fuzz 00:03:08.408 CC examples/vmd/lsvmd/lsvmd.o 00:03:08.408 CC examples/vmd/led/led.o 00:03:08.408 CC examples/idxd/perf/perf.o 00:03:08.408 CC test/event/event_perf/event_perf.o 00:03:08.408 CC examples/thread/thread/thread_ex.o 00:03:08.408 CC test/event/reactor/reactor.o 00:03:08.408 CC test/event/reactor_perf/reactor_perf.o 00:03:08.408 CXX test/cpp_headers/queue.o 00:03:08.408 LINK spdk_bdev 00:03:08.408 LINK spdk_nvme 00:03:08.408 CXX test/cpp_headers/reduce.o 00:03:08.667 CXX test/cpp_headers/rpc.o 00:03:08.667 CXX test/cpp_headers/scheduler.o 00:03:08.667 CXX test/cpp_headers/scsi.o 00:03:08.667 CXX test/cpp_headers/sock.o 00:03:08.667 CXX test/cpp_headers/scsi_spec.o 00:03:08.667 CXX test/cpp_headers/stdinc.o 00:03:08.667 CXX test/cpp_headers/string.o 00:03:08.667 CXX test/cpp_headers/thread.o 00:03:08.667 CXX test/cpp_headers/trace.o 00:03:08.667 CXX test/cpp_headers/trace_parser.o 00:03:08.667 CXX test/cpp_headers/tree.o 00:03:08.667 CC test/event/app_repeat/app_repeat.o 00:03:08.667 CXX test/cpp_headers/ublk.o 00:03:08.667 CXX test/cpp_headers/util.o 00:03:08.667 CXX test/cpp_headers/uuid.o 00:03:08.667 CXX test/cpp_headers/version.o 00:03:08.667 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.667 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.667 CXX test/cpp_headers/vhost.o 00:03:08.667 LINK lsvmd 00:03:08.667 CC test/event/scheduler/scheduler.o 00:03:08.667 CXX test/cpp_headers/vmd.o 00:03:08.667 CXX test/cpp_headers/xor.o 00:03:08.667 CXX test/cpp_headers/zipf.o 00:03:08.667 LINK mem_callbacks 00:03:08.667 LINK led 00:03:08.667 CC app/vhost/vhost.o 00:03:08.667 LINK reactor 00:03:08.667 LINK event_perf 00:03:08.667 LINK spdk_nvme_perf 00:03:08.926 LINK reactor_perf 00:03:08.926 LINK spdk_nvme_identify 00:03:08.926 LINK vhost_fuzz 00:03:08.926 LINK hello_sock 00:03:08.926 LINK spdk_top 00:03:08.926 LINK thread 00:03:08.926 LINK app_repeat 00:03:08.926 CC test/nvme/aer/aer.o 00:03:08.926 CC test/nvme/reset/reset.o 00:03:08.926 CC test/nvme/overhead/overhead.o 00:03:08.926 CC test/nvme/startup/startup.o 00:03:08.926 CC test/nvme/err_injection/err_injection.o 00:03:08.926 CC test/nvme/e2edp/nvme_dp.o 00:03:08.926 CC test/nvme/sgl/sgl.o 00:03:08.926 CC test/accel/dif/dif.o 00:03:08.926 CC test/blobfs/mkfs/mkfs.o 00:03:08.926 CC test/nvme/reserve/reserve.o 00:03:08.926 CC test/nvme/simple_copy/simple_copy.o 00:03:09.185 CC test/nvme/connect_stress/connect_stress.o 00:03:09.185 CC test/nvme/boot_partition/boot_partition.o 00:03:09.185 CC test/nvme/compliance/nvme_compliance.o 00:03:09.185 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:09.185 CC test/nvme/fused_ordering/fused_ordering.o 00:03:09.185 CC test/lvol/esnap/esnap.o 00:03:09.185 CC test/nvme/fdp/fdp.o 00:03:09.185 CC test/nvme/cuse/cuse.o 00:03:09.185 LINK idxd_perf 00:03:09.186 LINK vhost 00:03:09.186 LINK scheduler 00:03:09.186 LINK mkfs 00:03:09.186 LINK boot_partition 00:03:09.445 LINK startup 00:03:09.445 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:09.445 CC examples/nvme/abort/abort.o 00:03:09.445 CC examples/nvme/hotplug/hotplug.o 00:03:09.445 CC examples/nvme/reconnect/reconnect.o 00:03:09.445 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:09.445 CC examples/nvme/hello_world/hello_world.o 00:03:09.445 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:09.445 LINK doorbell_aers 00:03:09.445 CC examples/nvme/arbitration/arbitration.o 00:03:09.445 LINK connect_stress 00:03:09.445 LINK err_injection 
00:03:09.445 LINK aer 00:03:09.445 LINK reset 00:03:09.445 LINK reserve 00:03:09.445 LINK nvme_dp 00:03:09.445 LINK fused_ordering 00:03:09.445 LINK overhead 00:03:09.445 LINK memory_ut 00:03:09.445 LINK sgl 00:03:09.445 LINK simple_copy 00:03:09.445 CC examples/accel/perf/accel_perf.o 00:03:09.445 LINK dif 00:03:09.445 LINK nvme_compliance 00:03:09.703 LINK pmr_persistence 00:03:09.703 LINK cmb_copy 00:03:09.703 CC examples/blob/hello_world/hello_blob.o 00:03:09.703 LINK fdp 00:03:09.703 CC examples/blob/cli/blobcli.o 00:03:09.703 LINK hello_world 00:03:09.703 LINK hotplug 00:03:09.703 LINK reconnect 00:03:09.962 LINK arbitration 00:03:09.962 LINK abort 00:03:09.962 LINK hello_blob 00:03:09.962 LINK nvme_manage 00:03:09.962 LINK accel_perf 00:03:09.962 CC test/bdev/bdevio/bdevio.o 00:03:10.220 LINK blobcli 00:03:10.220 LINK iscsi_fuzz 00:03:10.477 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.477 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.477 LINK bdevio 00:03:10.477 LINK cuse 00:03:10.736 LINK hello_bdev 00:03:11.302 LINK bdevperf 00:03:11.568 CC examples/nvmf/nvmf/nvmf.o 00:03:11.855 LINK nvmf 00:03:14.387 LINK esnap 00:03:14.387 00:03:14.387 real 0m42.405s 00:03:14.387 user 7m26.792s 00:03:14.387 sys 1m49.551s 00:03:14.387 05:59:07 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:14.387 05:59:07 make -- common/autotest_common.sh@10 -- $ set +x 00:03:14.387 ************************************ 00:03:14.387 END TEST make 00:03:14.387 ************************************ 00:03:14.387 05:59:07 -- common/autotest_common.sh@1142 -- $ return 0 00:03:14.387 05:59:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:14.387 05:59:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:14.387 05:59:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:14.387 05:59:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.387 05:59:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:14.387 05:59:07 -- pm/common@44 -- $ pid=1500228 00:03:14.387 05:59:07 -- pm/common@50 -- $ kill -TERM 1500228 00:03:14.387 05:59:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.387 05:59:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:14.387 05:59:07 -- pm/common@44 -- $ pid=1500230 00:03:14.387 05:59:07 -- pm/common@50 -- $ kill -TERM 1500230 00:03:14.387 05:59:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.387 05:59:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:14.387 05:59:07 -- pm/common@44 -- $ pid=1500232 00:03:14.387 05:59:07 -- pm/common@50 -- $ kill -TERM 1500232 00:03:14.387 05:59:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.387 05:59:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:14.387 05:59:07 -- pm/common@44 -- $ pid=1500262 00:03:14.387 05:59:07 -- pm/common@50 -- $ sudo -E kill -TERM 1500262 00:03:14.646 05:59:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:14.646 05:59:07 -- nvmf/common.sh@7 -- # uname -s 00:03:14.646 05:59:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:14.646 05:59:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:14.646 05:59:07 -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:03:14.646 05:59:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:14.646 05:59:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:14.646 05:59:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:14.646 05:59:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:14.646 05:59:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:14.646 05:59:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:14.646 05:59:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:14.646 05:59:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:14.646 05:59:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:14.646 05:59:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:14.646 05:59:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:14.646 05:59:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:14.646 05:59:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:14.646 05:59:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:14.646 05:59:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:14.646 05:59:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:14.646 05:59:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:14.646 05:59:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.646 05:59:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.646 05:59:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.646 05:59:07 -- paths/export.sh@5 -- # export PATH 00:03:14.646 05:59:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.646 05:59:07 -- nvmf/common.sh@47 -- # : 0 00:03:14.646 05:59:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:14.646 05:59:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:14.647 05:59:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:14.647 05:59:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:14.647 05:59:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:14.647 05:59:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:14.647 05:59:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:14.647 05:59:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:14.647 05:59:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:14.647 05:59:07 -- spdk/autotest.sh@32 -- # uname -s 
00:03:14.647 05:59:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:14.647 05:59:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:14.647 05:59:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:14.647 05:59:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:14.647 05:59:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:14.647 05:59:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:14.647 05:59:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:14.647 05:59:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:14.647 05:59:07 -- spdk/autotest.sh@48 -- # udevadm_pid=1571600 00:03:14.647 05:59:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:14.647 05:59:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:14.647 05:59:07 -- pm/common@17 -- # local monitor 00:03:14.647 05:59:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.647 05:59:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.647 05:59:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.647 05:59:07 -- pm/common@21 -- # date +%s 00:03:14.647 05:59:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.647 05:59:07 -- pm/common@21 -- # date +%s 00:03:14.647 05:59:07 -- pm/common@25 -- # sleep 1 00:03:14.647 05:59:07 -- pm/common@21 -- # date +%s 00:03:14.647 05:59:07 -- pm/common@21 -- # date +%s 00:03:14.647 05:59:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721707147 00:03:14.647 05:59:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721707147 00:03:14.647 05:59:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721707147 00:03:14.647 05:59:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721707147 00:03:14.647 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721707147_collect-vmstat.pm.log 00:03:14.647 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721707147_collect-cpu-load.pm.log 00:03:14.647 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721707147_collect-cpu-temp.pm.log 00:03:14.647 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721707147_collect-bmc-pm.bmc.pm.log 00:03:15.583 05:59:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:15.583 05:59:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:15.583 05:59:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:15.583 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:03:15.583 05:59:08 -- 
spdk/autotest.sh@59 -- # create_test_list 00:03:15.583 05:59:08 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:15.583 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:03:15.583 05:59:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:15.583 05:59:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.583 05:59:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.583 05:59:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:15.583 05:59:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.583 05:59:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:15.583 05:59:08 -- common/autotest_common.sh@1455 -- # uname 00:03:15.583 05:59:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:15.583 05:59:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:15.583 05:59:08 -- common/autotest_common.sh@1475 -- # uname 00:03:15.583 05:59:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:15.583 05:59:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:15.583 05:59:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:15.583 05:59:08 -- spdk/autotest.sh@72 -- # hash lcov 00:03:15.583 05:59:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:15.583 05:59:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:15.583 --rc lcov_branch_coverage=1 00:03:15.583 --rc lcov_function_coverage=1 00:03:15.583 --rc genhtml_branch_coverage=1 00:03:15.583 --rc genhtml_function_coverage=1 00:03:15.583 --rc genhtml_legend=1 00:03:15.583 --rc geninfo_all_blocks=1 00:03:15.583 ' 00:03:15.583 05:59:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:15.583 --rc lcov_branch_coverage=1 00:03:15.583 --rc lcov_function_coverage=1 00:03:15.583 --rc genhtml_branch_coverage=1 00:03:15.583 --rc genhtml_function_coverage=1 00:03:15.583 --rc genhtml_legend=1 00:03:15.583 --rc geninfo_all_blocks=1 00:03:15.583 ' 00:03:15.583 05:59:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:15.583 --rc lcov_branch_coverage=1 00:03:15.583 --rc lcov_function_coverage=1 00:03:15.583 --rc genhtml_branch_coverage=1 00:03:15.583 --rc genhtml_function_coverage=1 00:03:15.583 --rc genhtml_legend=1 00:03:15.583 --rc geninfo_all_blocks=1 00:03:15.583 --no-external' 00:03:15.583 05:59:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:15.583 --rc lcov_branch_coverage=1 00:03:15.583 --rc lcov_function_coverage=1 00:03:15.583 --rc genhtml_branch_coverage=1 00:03:15.583 --rc genhtml_function_coverage=1 00:03:15.583 --rc genhtml_legend=1 00:03:15.583 --rc geninfo_all_blocks=1 00:03:15.583 --no-external' 00:03:15.583 05:59:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:15.842 lcov: LCOV version 1.14 00:03:15.842 05:59:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:28.063 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:28.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:28.063 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions 
found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:28.064 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:28.064 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:28.065 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:28.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:28.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:28.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:28.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:28.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:28.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:28.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:28.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:28.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:28.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:28.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:28.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:42.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.944 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:49.534 05:59:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:49.534 05:59:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.534 05:59:41 -- common/autotest_common.sh@10 -- # set +x 00:03:49.534 05:59:41 -- spdk/autotest.sh@91 -- # rm -f 00:03:49.534 05:59:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.534 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:49.534 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:49.534 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:49.534 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:49.534 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:49.793 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:49.793 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:49.793 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:49.793 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:49.793 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:49.793 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:49.793 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:49.793 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:49.793 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:49.793 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:49.793 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:49.793 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:49.793 05:59:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 
00:03:49.793 05:59:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.793 05:59:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.793 05:59:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.793 05:59:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.793 05:59:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.793 05:59:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.793 05:59:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.793 05:59:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.793 05:59:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:49.793 05:59:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:49.793 05:59:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:49.793 05:59:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:49.793 05:59:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:49.793 05:59:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:50.052 No valid GPT data, bailing 00:03:50.052 05:59:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.052 05:59:43 -- scripts/common.sh@391 -- # pt= 00:03:50.052 05:59:43 -- scripts/common.sh@392 -- # return 1 00:03:50.052 05:59:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:50.052 1+0 records in 00:03:50.052 1+0 records out 00:03:50.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536409 s, 195 MB/s 00:03:50.052 05:59:43 -- spdk/autotest.sh@118 -- # sync 00:03:50.052 05:59:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:50.052 05:59:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:50.052 05:59:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.953 05:59:44 -- spdk/autotest.sh@124 -- # uname -s 00:03:51.953 05:59:44 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:51.953 05:59:44 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.953 05:59:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.953 05:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.953 05:59:44 -- common/autotest_common.sh@10 -- # set +x 00:03:51.953 ************************************ 00:03:51.953 START TEST setup.sh 00:03:51.953 ************************************ 00:03:51.953 05:59:45 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.953 * Looking for test storage... 
00:03:51.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.953 05:59:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:51.953 05:59:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:51.953 05:59:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:51.953 05:59:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.953 05:59:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.953 05:59:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:51.953 ************************************ 00:03:51.953 START TEST acl 00:03:51.953 ************************************ 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:51.953 * Looking for test storage... 00:03:51.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.953 05:59:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.953 05:59:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.953 05:59:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:51.953 05:59:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:51.953 05:59:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:51.953 05:59:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:51.953 05:59:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:51.954 05:59:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.954 05:59:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.329 05:59:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:53.329 05:59:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:53.329 05:59:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.329 05:59:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:53.329 05:59:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.329 05:59:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:54.266 Hugepages 00:03:54.266 node hugesize free / total 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 00:03:54.266 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:54.266 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:54.526 05:59:47 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:54.526 05:59:47 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.526 05:59:47 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.526 05:59:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.526 ************************************ 00:03:54.526 START TEST denied 00:03:54.526 ************************************ 00:03:54.526 05:59:47 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:54.526 05:59:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:54.526 05:59:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output 
config 00:03:54.526 05:59:47 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:54.526 05:59:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.526 05:59:47 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.903 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.903 05:59:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.438 00:03:58.438 real 0m3.785s 00:03:58.438 user 0m1.115s 00:03:58.438 sys 0m1.773s 00:03:58.438 05:59:51 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.438 05:59:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:58.438 ************************************ 00:03:58.438 END TEST denied 00:03:58.438 ************************************ 00:03:58.438 05:59:51 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:58.438 05:59:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:58.438 05:59:51 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.438 05:59:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.438 05:59:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.438 ************************************ 00:03:58.438 START TEST allowed 00:03:58.438 ************************************ 00:03:58.438 05:59:51 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:58.438 05:59:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:58.438 05:59:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:58.438 05:59:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:58.438 05:59:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.438 05:59:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.969 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.969 05:59:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:00.969 05:59:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:00.969 05:59:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:00.969 05:59:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.969 05:59:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.346 00:04:02.346 real 0m3.836s 
00:04:02.346 user 0m0.982s 00:04:02.346 sys 0m1.620s 00:04:02.346 05:59:55 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.346 05:59:55 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:02.346 ************************************ 00:04:02.346 END TEST allowed 00:04:02.346 ************************************ 00:04:02.346 05:59:55 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:02.346 00:04:02.346 real 0m10.309s 00:04:02.346 user 0m3.126s 00:04:02.346 sys 0m5.120s 00:04:02.346 05:59:55 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.346 05:59:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:02.346 ************************************ 00:04:02.346 END TEST acl 00:04:02.346 ************************************ 00:04:02.346 05:59:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.346 05:59:55 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.346 05:59:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.346 05:59:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.346 05:59:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.346 ************************************ 00:04:02.346 START TEST hugepages 00:04:02.346 ************************************ 00:04:02.346 05:59:55 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.346 * Looking for test storage... 00:04:02.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.346 05:59:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41799772 kB' 'MemAvailable: 45671808 kB' 'Buffers: 6836 kB' 'Cached: 11913004 kB' 'SwapCached: 0 kB' 'Active: 8709468 kB' 'Inactive: 3682640 kB' 'Active(anon): 
8313836 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476112 kB' 'Mapped: 208284 kB' 'Shmem: 7841568 kB' 'KReclaimable: 438200 kB' 'Slab: 839328 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 401128 kB' 'KernelStack: 12864 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9455584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.347 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:02.348 
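[Editor's note] The trace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches Hugepagesize, echoing the value (2048) and returning, after which hugepages.sh records default_hugepages=2048 and resolves the per-size and global nr_hugepages paths. A minimal stand-alone sketch of that read pattern (illustrative helper name, not the actual SPDK function):

    # Hedged sketch: fetch one /proc/meminfo field the way the traced loop does
    # (split on ': ', skip non-matching keys, print the value without the unit).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    default_hugepages=$(get_meminfo_sketch Hugepagesize)   # -> 2048 (kB) on this system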
05:59:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.348 05:59:55 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:02.348 05:59:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.348 05:59:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.348 05:59:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.348 ************************************ 00:04:02.348 START TEST default_setup 00:04:02.348 ************************************ 00:04:02.348 05:59:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:02.348 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:02.348 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.349 05:59:55 
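[Editor's note] clear_hp (hugepages.sh@37-45 above) zeroes every hugepage pool on both NUMA nodes before the test and exports CLEAR_HUGE=yes; default_setup then calls get_test_nr_hugepages with 2097152 kB, which at the 2048 kB default page size works out to nr_hugepages=1024 targeted at node 0. The clearing step amounts to the standard sysfs walk sketched here (hedged; the real script adds its own bookkeeping):

    # Hedged sketch: reset all per-node hugepage pools to zero (needs root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes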
setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.349 05:59:55 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.726 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.726 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.726 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.726 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.726 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.726 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.726 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.726 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.726 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:04.665 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.665 05:59:58 
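[Editor's note] Before verification starts, scripts/setup.sh detaches the ioatdma channels and the NVMe controller at 0000:88:00.0 from their kernel drivers and hands them to vfio-pci, which is what the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above report. The log does not show the mechanism; a generic sysfs sequence that produces the same rebind looks like this (illustrative BDF, assumes the vfio-pci module is already loaded; spdk/scripts/setup.sh may do it differently):

    # Hedged sketch of a manual vfio-pci rebind, not the SPDK script itself.
    bdf=0000:88:00.0
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"     # detach from nvme
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # prefer vfio-pci
    echo "$bdf"   > /sys/bus/pci/drivers_probe                    # re-probe and bind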
setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.665 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.666 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.666 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43877780 kB' 'MemAvailable: 47749864 kB' 'Buffers: 6836 kB' 'Cached: 11913100 kB' 'SwapCached: 0 kB' 'Active: 8728152 kB' 'Inactive: 3682640 kB' 'Active(anon): 8332520 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494136 kB' 'Mapped: 208412 kB' 'Shmem: 7841664 kB' 'KReclaimable: 438248 kB' 'Slab: 838912 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400664 kB' 'KernelStack: 12832 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9476724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:04.666 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.666 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.666 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.666 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 
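[Editor's note] The snapshot printed above already reflects the allocation: HugePages_Total and HugePages_Free are both 1024 and Hugetlb is 2097152 kB, i.e. 1024 pages x 2048 kB, exactly the size default_setup requested; AnonHugePages is 0 kB, which is the value the comparison loop below is about to extract. A quick consistency check on the same counters (stand-alone awk, not part of the test suite):

    # Hedged sketch: confirm Hugetlb == HugePages_Total * Hugepagesize.
    awk '/^(HugePages_Total|Hugepagesize|Hugetlb):/ {v[$1]=$2}
         END {
             if (v["Hugetlb:"] == v["HugePages_Total:"] * v["Hugepagesize:"])
                 print "consistent"
             else
                 print "mismatch"
         }' /proc/meminfo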
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.930 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.931 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43891112 kB' 'MemAvailable: 47763196 kB' 'Buffers: 6836 kB' 'Cached: 11913104 kB' 'SwapCached: 0 kB' 'Active: 8728020 kB' 'Inactive: 3682640 kB' 'Active(anon): 8332388 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494508 kB' 'Mapped: 208352 kB' 'Shmem: 7841668 kB' 'KReclaimable: 438248 kB' 'Slab: 838892 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400644 kB' 'KernelStack: 12848 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9477704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.932 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 
05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- 
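[Editor's note] verify_nr_hugepages repeats the same /proc/meminfo walk for each counter it needs: AnonHugePages came back 0 earlier (transparent hugepages are in madvise mode on this host, so the check applies), HugePages_Surp has just matched and the surp=0 assignment follows below, and a third pass for HugePages_Rsvd comes next. Outside the test's xtrace-heavy helper, the same three values can be collected in a single pass (hedged simplification, not the SPDK code):

    # Hedged sketch: gather the counters the verification step reads, in one pass.
    read -r anon surp rsvd < <(awk '
        /^AnonHugePages:/  {a=$2}
        /^HugePages_Surp:/ {s=$2}
        /^HugePages_Rsvd:/ {r=$2}
        END {print a+0, s+0, r+0}' /proc/meminfo)
    echo "anon=${anon}kB surp=${surp} rsvd=${rsvd}"   # e.g. anon=0kB surp=0 rsvd=0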
setup/hugepages.sh@99 -- # surp=0 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.933 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43885284 kB' 'MemAvailable: 47757368 kB' 'Buffers: 6836 kB' 'Cached: 11913104 kB' 'SwapCached: 0 kB' 'Active: 8730224 kB' 'Inactive: 3682640 kB' 'Active(anon): 8334592 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496252 kB' 'Mapped: 208788 kB' 'Shmem: 7841668 kB' 'KReclaimable: 438248 kB' 'Slab: 839008 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400760 kB' 'KernelStack: 12784 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9480628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.935 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.936 nr_hugepages=1024 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.936 resv_hugepages=0 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.936 surplus_hugepages=0 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.936 anon_hugepages=0 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.936 
05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43882024 kB' 'MemAvailable: 47754108 kB' 'Buffers: 6836 kB' 'Cached: 11913108 kB' 'SwapCached: 0 kB' 'Active: 8733144 kB' 'Inactive: 3682640 kB' 'Active(anon): 8337512 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499204 kB' 'Mapped: 209204 kB' 'Shmem: 7841672 kB' 'KReclaimable: 438248 kB' 'Slab: 838952 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400704 kB' 'KernelStack: 12784 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9482544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196988 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 
05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
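The long run of `[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue` entries above is setup/common.sh's get_meminfo walking every /proc/meminfo field with `IFS=': '` and `read -r var val _` until it reaches the requested key (HugePages_Total here, which echoes 1024). A minimal, self-contained sketch of that lookup pattern follows; the function name, the trailing-unit handling, and the final accounting line are illustrative assumptions, not the verbatim SPDK helper:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the /proc/meminfo key lookup seen in the trace above.
# Assumption: requested field names match /proc/meminfo exactly (e.g. HugePages_Total).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. 1024 for HugePages_Total on this test node
            return 0
        fi
    done < /proc/meminfo
    return 1                     # key not present
}

# Mirrors the accounting check traced from hugepages.sh: the kernel's total
# hugepage count should equal the requested count plus surplus plus reserved.
nr=$(get_meminfo_sketch HugePages_Total)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
(( nr == 1024 + surp + resv )) && echo "hugepage accounting OK"
```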
00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 05:59:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26013616 kB' 'MemUsed: 6816268 kB' 'SwapCached: 0 kB' 'Active: 3190840 kB' 'Inactive: 292152 kB' 'Active(anon): 3029484 kB' 'Inactive(anon): 0 kB' 'Active(file): 161356 kB' 'Inactive(file): 292152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3239800 kB' 'Mapped: 84952 kB' 'AnonPages: 246428 kB' 'Shmem: 2786292 kB' 'KernelStack: 6920 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 165184 kB' 'Slab: 397720 kB' 'SReclaimable: 165184 kB' 'SUnreclaim: 232536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
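At this point the same scan is repeated against /sys/devices/system/node/node0/meminfo to fetch HugePages_Surp for node 0; the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace strips the leading `Node 0 ` prefix from each per-node line. A rough per-node equivalent, assuming the per-node meminfo layout shown on this machine (helper name and output handling are illustrative):

```bash
#!/usr/bin/env bash
# Illustrative per-NUMA-node lookup; not the verbatim setup/common.sh code path.
# Per-node meminfo lines look like: "Node 0 HugePages_Surp:     0"
get_node_meminfo_sketch() {
    local get=$1 node=$2 _node _id key val _
    while read -r _node _id key val _; do      # "Node" "<id>" "Key:" "value" ["kB"]
        if [[ $key == "${get}:" ]]; then
            echo "$val"
            return 0
        fi
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

# The test adds each node's surplus to its expected page count; on this run
# node0 should report 1024 hugepages and 0 surplus.
surp0=$(get_node_meminfo_sketch HugePages_Surp 0)
echo "node0 surplus: ${surp0}"
```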
00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.938 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.939 node0=1024 expecting 1024 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.939 00:04:04.939 real 0m2.522s 00:04:04.939 user 0m0.675s 00:04:04.939 sys 0m0.959s 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.939 05:59:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:04.939 ************************************ 00:04:04.939 END TEST default_setup 00:04:04.939 ************************************ 00:04:04.939 05:59:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:04.939 05:59:58 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:04.939 05:59:58 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.939 05:59:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.939 05:59:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.939 ************************************ 00:04:04.939 START TEST per_node_1G_alloc 00:04:04.939 ************************************ 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:04.939 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.940 05:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.875 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 
00:04:05.875 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.875 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:05.875 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:05.875 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:05.875 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:05.875 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:05.875 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:05.875 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:05.875 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:05.875 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:05.875 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:05.875 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:05.875 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:05.875 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:05.875 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:05.875 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 
43876140 kB' 'MemAvailable: 47748224 kB' 'Buffers: 6836 kB' 'Cached: 11913216 kB' 'SwapCached: 0 kB' 'Active: 8727656 kB' 'Inactive: 3682640 kB' 'Active(anon): 8332024 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493480 kB' 'Mapped: 208924 kB' 'Shmem: 7841780 kB' 'KReclaimable: 438248 kB' 'Slab: 838908 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400660 kB' 'KernelStack: 12800 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9476972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.139 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.140 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43876920 kB' 'MemAvailable: 47749004 kB' 'Buffers: 6836 kB' 'Cached: 11913220 kB' 'SwapCached: 0 kB' 'Active: 8728068 kB' 'Inactive: 3682640 kB' 'Active(anon): 8332436 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493984 kB' 'Mapped: 208440 kB' 'Shmem: 7841784 kB' 'KReclaimable: 438248 kB' 'Slab: 838940 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400692 kB' 'KernelStack: 12848 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9476992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.141 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.142 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43879336 kB' 'MemAvailable: 47751420 kB' 'Buffers: 6836 kB' 'Cached: 11913240 kB' 'SwapCached: 0 kB' 'Active: 8728296 kB' 'Inactive: 3682640 kB' 'Active(anon): 8332664 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494212 kB' 'Mapped: 208364 kB' 'Shmem: 7841804 kB' 'KReclaimable: 438248 kB' 'Slab: 838948 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400700 kB' 'KernelStack: 12896 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9477016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.143 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.143 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.144 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.145 nr_hugepages=1024 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.145 resv_hugepages=0 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.145 surplus_hugepages=0 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.145 anon_hugepages=0 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43879336 kB' 'MemAvailable: 47751420 kB' 'Buffers: 6836 kB' 'Cached: 11913260 kB' 'SwapCached: 0 kB' 'Active: 8727896 kB' 'Inactive: 3682640 kB' 'Active(anon): 8332264 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493788 kB' 'Mapped: 208364 kB' 'Shmem: 7841824 kB' 'KReclaimable: 438248 kB' 'Slab: 838948 kB' 'SReclaimable: 438248 kB' 'SUnreclaim: 400700 kB' 'KernelStack: 12832 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9477036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.145 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
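Editor's note: the trace above and below is one and the same field-matching pass, repeated for every /proc/meminfo key until the requested field is reached. A minimal bash sketch of that lookup pattern, assuming a helper shaped like the get_meminfo calls visible in the trace (a simplified reconstruction, not the project's actual setup/common.sh):

shopt -s extglob

# Return one field from /proc/meminfo, or from a node's meminfo when a
# NUMA node number is given (mirrors the get/node/mem_f locals in the trace).
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it so both
    # file formats reduce to "Key: value [kB]".
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    return 1
}

# Example lookups matching the ones traced here:
get_meminfo HugePages_Total      # system-wide
get_meminfo HugePages_Surp 0     # node 0 only
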
00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.146 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.147 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.407 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.407 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27065324 kB' 'MemUsed: 5764560 kB' 'SwapCached: 0 kB' 'Active: 3190932 kB' 'Inactive: 292152 kB' 'Active(anon): 3029576 kB' 'Inactive(anon): 0 kB' 'Active(file): 161356 kB' 'Inactive(file): 292152 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3239804 kB' 'Mapped: 84812 kB' 'AnonPages: 246436 kB' 'Shmem: 2786296 kB' 'KernelStack: 6920 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 165184 kB' 'Slab: 397608 kB' 'SReclaimable: 165184 kB' 'SUnreclaim: 232424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.408 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
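Editor's note: the per-node pass traced here (node 0 above, node 1 below) amounts to the bookkeeping sketched in plain bash below; nodes_test, resv and the 512/512 split follow the trace, while node_hp is a hypothetical stand-in for the per-node meminfo read, not the project's actual helper:

shopt -s extglob

# Hypothetical per-node reader: pull one "HugePages_*" field out of
# /sys/devices/system/node/node<N>/meminfo ("Node <N> Key: value").
node_hp() {
    awk -v key="$2:" '$1 == "Node" && $3 == key { print $4 }' \
        "/sys/devices/system/node/node$1/meminfo"
}

resv=0
nodes_test=()

# Two NUMA nodes, each expected to hold half of the 1024 requested 2 MiB pages.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_test[${node##*node}]=512
done

# Fold reserved and surplus pages into the expected per-node count, then
# compare against what the kernel actually reports for that node.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(node_hp "$node" HugePages_Surp) ))
    echo "node$node: expected ${nodes_test[node]}, kernel reports $(node_hp "$node" HugePages_Total)"
done
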
00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.409 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16813764 kB' 'MemUsed: 10898060 kB' 'SwapCached: 0 kB' 'Active: 5536736 kB' 'Inactive: 3390488 kB' 'Active(anon): 5302460 kB' 'Inactive(anon): 0 kB' 'Active(file): 234276 kB' 'Inactive(file): 3390488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8680352 kB' 'Mapped: 123552 kB' 'AnonPages: 247008 kB' 'Shmem: 5055588 kB' 'KernelStack: 5912 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273064 kB' 'Slab: 441340 kB' 'SReclaimable: 273064 kB' 'SUnreclaim: 168276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.409 
05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.409 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.410 node0=512 expecting 512 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.410 
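By this point the suite has folded any surplus pages it found into each node's expected count and prints the "nodeN=512 expecting 512" verdicts seen here. A rough standalone equivalent of that per-node comparison is sketched below; the node list and the 512-page expectation are this run's values, and the awk field positions follow the "Node <n> Field: value" layout of the per-node meminfo files.

# Rough equivalent of the "node0=512 expecting 512" check above (a sketch,
# not the suite's code). Node list and expectation are this run's values.
expected=512
for node in 0 1; do
    f=/sys/devices/system/node/node$node/meminfo
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$f")
    surp=$(awk  '$3 == "HugePages_Surp:"  {print $4}' "$f")
    # Surplus pages are transient overcommit on top of the persistent pool,
    # so they are folded into the figure the reported total is compared to.
    echo "node$node=$total expecting $((expected + surp))"
done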
05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:06.410 node1=512 expecting 512 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.410 00:04:06.410 real 0m1.351s 00:04:06.410 user 0m0.583s 00:04:06.410 sys 0m0.729s 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.410 05:59:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.410 ************************************ 00:04:06.410 END TEST per_node_1G_alloc 00:04:06.410 ************************************ 00:04:06.410 05:59:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:06.410 05:59:59 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:06.410 05:59:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.410 05:59:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.410 05:59:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.410 ************************************ 00:04:06.410 START TEST even_2G_alloc 00:04:06.410 ************************************ 00:04:06.410 05:59:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:06.410 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:06.410 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.410 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:06.411 05:59:59 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.411 05:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.345 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:07.345 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.345 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:07.345 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:07.345 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:07.345 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:07.345 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:07.345 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:07.345 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:07.345 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:07.345 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:07.345 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:07.345 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:07.345 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:07.345 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:07.345 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:07.345 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.610 06:00:00 
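even_2G_alloc asks for 2 GiB of the default 2048 kB hugepages and, with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, lets scripts/setup.sh spread them evenly across both NUMA nodes before re-reading the counters. The sizing arithmetic being exercised, as a small sketch using this machine's values:

# Sketch of the sizing math behind the even_2G_alloc setup above.
size_kb=2097152                      # requested pool: 2 GiB expressed in kB
hugepage_kb=2048                     # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))        # -> 1024 pages (NRHUGE)

nodes=(0 1)                          # NUMA nodes present on this test machine
per_node=$(( nr_hugepages / ${#nodes[@]} ))      # -> 512 pages per node
for node in "${nodes[@]}"; do
    echo "node$node should end up with $per_node hugepages"
done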
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43854016 kB' 'MemAvailable: 47726092 kB' 'Buffers: 6836 kB' 'Cached: 11913356 kB' 'SwapCached: 0 kB' 'Active: 8733672 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338040 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499524 kB' 'Mapped: 209272 kB' 'Shmem: 7841920 kB' 'KReclaimable: 438240 kB' 'Slab: 839128 kB' 'SReclaimable: 438240 kB' 'SUnreclaim: 400888 kB' 'KernelStack: 13008 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9485360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197320 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.610 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.611 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43870044 kB' 'MemAvailable: 47742120 kB' 'Buffers: 6836 kB' 'Cached: 11913360 kB' 'SwapCached: 0 kB' 'Active: 8735932 kB' 'Inactive: 3682640 kB' 'Active(anon): 8340300 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501764 kB' 'Mapped: 208812 kB' 'Shmem: 7841924 kB' 'KReclaimable: 438240 kB' 'Slab: 839116 kB' 'SReclaimable: 438240 kB' 'SUnreclaim: 400876 kB' 'KernelStack: 13024 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9487896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.612 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.613 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 
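The verification pass running here first read AnonHugePages (transparent hugepages are not inflating the numbers; the trace above reports anon=0) and is now walking /proc/meminfo again for HugePages_Surp. The same two global reads, condensed into a sketch; awk field positions follow /proc/meminfo's "Field: value" layout.

# Condensed sketch of the global reads the verification above performs.
anon=$(awk  '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
# This run reports anon=0 kB, surp=0 and total=1024 (2048 kB pages = 2 GiB).
echo "AnonHugePages=${anon} kB HugePages_Surp=${surp} HugePages_Total=${total}"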
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.614 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43878256 kB' 'MemAvailable: 47750332 kB' 'Buffers: 6836 kB' 'Cached: 11913360 kB' 'SwapCached: 0 kB' 'Active: 8730324 kB' 'Inactive: 3682640 kB' 'Active(anon): 8334692 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496244 kB' 'Mapped: 209228 kB' 'Shmem: 7841924 kB' 'KReclaimable: 438240 kB' 'Slab: 839172 kB' 'SReclaimable: 438240 kB' 'SUnreclaim: 400932 kB' 'KernelStack: 12992 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9483020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
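For orientation, the per-key scan traced above is the setup helper matching one /proc/meminfo field at a time against the requested counter (HugePages_Surp, which came back 0, and now HugePages_Rsvd). A minimal sketch of that lookup pattern, assuming an illustrative get_meminfo_sketch helper rather than the actual setup/common.sh function:

get_meminfo_sketch() {
    # Sketch only, not the setup/common.sh helper itself: echo the value of one
    # meminfo field. $1 = field name (e.g. HugePages_Surp); $2 = optional NUMA node.
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node N "; strip it so the key/value
    # layout matches /proc/meminfo, then scan line by line for the requested key,
    # exactly the continue-until-match loop visible in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Surp       # 0 on the system traced above
get_meminfo_sketch HugePages_Total 0    # 512 for NUMA node 0 in this run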
00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.615 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.616 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.617 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.618 nr_hugepages=1024 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.618 resv_hugepages=0 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.618 surplus_hugepages=0 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.618 anon_hugepages=0 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43879120 kB' 'MemAvailable: 47751196 kB' 'Buffers: 6836 kB' 'Cached: 11913400 kB' 'SwapCached: 0 kB' 'Active: 8733728 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338096 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499436 kB' 'Mapped: 208812 kB' 'Shmem: 7841964 kB' 'KReclaimable: 438240 kB' 'Slab: 839160 kB' 'SReclaimable: 438240 kB' 'SUnreclaim: 400920 kB' 'KernelStack: 12944 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9486344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197256 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.618 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.619 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
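The values being read back around this point (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, and the 512-per-node split that follows in the trace) amount to verifying an even 2048 kB hugepage allocation across the two NUMA nodes. A hedged sketch of that check, using illustrative awk helpers rather than the test's own get_meminfo, and only re-reading counters as the trace does (no sysfs writes at this step):

# Sketch only, with illustrative helper names.
hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
node_hp_total() { awk '/HugePages_Total:/ {print $NF}' "/sys/devices/system/node/node$1/meminfo"; }

expected=1024
total=$(hp HugePages_Total)   # 1024 in the run above
surp=$(hp HugePages_Surp)     # 0
resv=$(hp HugePages_Rsvd)     # 0
# Same shape as the trace's (( 1024 == nr_hugepages + surp + resv )) check,
# trivially satisfied here because surplus and reserved pages are both zero.
(( expected == total + surp + resv )) || echo "hugepage accounting mismatch" >&2

nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( expected / ${#nodes[@]} ))   # even_2G_alloc: 1024 / 2 nodes = 512 each
for n in "${nodes[@]}"; do
    id=${n##*node}
    got=$(node_hp_total "$id")
    (( got == per_node )) || echo "node$id: $got hugepages, expected $per_node" >&2
done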
00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.620 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.621 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27064856 kB' 'MemUsed: 5765028 kB' 'SwapCached: 0 kB' 'Active: 3191624 kB' 'Inactive: 292152 kB' 'Active(anon): 3030268 kB' 'Inactive(anon): 0 kB' 'Active(file): 161356 kB' 'Inactive(file): 292152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3239812 kB' 'Mapped: 84976 kB' 'AnonPages: 247040 kB' 'Shmem: 2786304 kB' 'KernelStack: 6904 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 165184 kB' 'Slab: 397816 kB' 'SReclaimable: 165184 kB' 'SUnreclaim: 232632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.621 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 
06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.622 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.900 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.900 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.901 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.901 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.902 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.903 06:00:00 
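The long runs of "IFS=': '", "read -r var val _" and "continue" entries above are setup/common.sh's get_meminfo helper scanning a meminfo file one field at a time until it reaches the requested key (HugePages_Total earlier, then HugePages_Surp per node here). A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from the script, so details may differ:

#!/usr/bin/env bash
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # A per-node query reads that node's own meminfo when the sysfs file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # first matching key wins
    done
    return 1
}

# get_meminfo HugePages_Total   -> 1024 on this host
# get_meminfo HugePages_Surp 0  -> 0 for NUMA node 0 (the value echoed just above)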
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16812248 kB' 'MemUsed: 10899576 kB' 'SwapCached: 0 kB' 'Active: 5543828 kB' 'Inactive: 3390488 kB' 'Active(anon): 5309552 kB' 'Inactive(anon): 0 kB' 'Active(file): 234276 kB' 'Inactive(file): 3390488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8680440 kB' 'Mapped: 124220 kB' 'AnonPages: 254160 kB' 'Shmem: 5055676 kB' 'KernelStack: 6040 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273056 kB' 'Slab: 441344 kB' 'SReclaimable: 273056 kB' 'SUnreclaim: 168288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.903 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.904 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.904 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.904 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.904 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.904 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.904 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.906 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.906 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.906 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.906 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.906 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.906 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.906 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.907 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.908 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.909 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.910 
06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.910 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.911 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.912 node0=512 expecting 512 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.912 06:00:00 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:07.912 node1=512 expecting 512 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:07.912 00:04:07.912 real 0m1.409s 00:04:07.912 user 0m0.552s 00:04:07.912 sys 0m0.802s 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.912 06:00:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.912 ************************************ 00:04:07.912 END TEST even_2G_alloc 00:04:07.912 ************************************ 00:04:07.912 06:00:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.912 06:00:00 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:07.912 06:00:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.912 06:00:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.912 06:00:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.912 ************************************ 00:04:07.912 START TEST odd_alloc 00:04:07.912 ************************************ 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- 
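Before the odd_alloc run traced above and below, even_2G_alloc passed on the "node0=512 expecting 512" / "node1=512 expecting 512" lines: 1024 huge pages spread evenly across the two NUMA nodes. A rough, self-contained illustration of that kind of per-node check (an outline of the idea using the standard sysfs paths, not the hugepages.sh code itself):

#!/usr/bin/env bash
expected_per_node=512
rc=0
for dir in /sys/devices/system/node/node[0-9]*; do
    node=${dir##*node}
    actual=$(cat "$dir"/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node$node=$actual expecting $expected_per_node"
    (( actual == expected_per_node )) || rc=1   # any uneven node fails the check
done
exit $rc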
setup/hugepages.sh@83 -- # : 0 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.912 06:00:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.848 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:08.848 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:08.848 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:08.848 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:08.848 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:08.848 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:08.848 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:08.848 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:08.848 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:08.848 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:08.848 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:08.848 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:08.848 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:08.848 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:08.848 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:08.848 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:08.848 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- 
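The odd_alloc parameters traced above (size 2098176 kB, nr_hugepages=1025, nodes_test entries of 512 and 513, HUGEMEM=2049) follow from simple arithmetic; the snippet below sketches that arithmetic only, not the get_test_nr_hugepages* helpers themselves:

#!/usr/bin/env bash
size_kb=2098176
page_kb=2048
nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))   # 2098176 / 2048 = 1024.5 -> 1025
hugemem_mb=$(( size_kb / 1024 ))                         # 2049, i.e. HUGEMEM=2049
base=$(( nr_hugepages / 2 ))                             # 512
extra=$(( nr_hugepages % 2 ))                            # the one page that cannot be split evenly
echo "total=$nr_hugepages split=${base}+$(( base + extra )) HUGEMEM=$hugemem_mb"
# -> total=1025 split=512+513 HUGEMEM=2049, matching the nodes_test assignments above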
setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43838148 kB' 'MemAvailable: 47710200 kB' 'Buffers: 6836 kB' 'Cached: 11913492 kB' 'SwapCached: 0 kB' 'Active: 8733392 kB' 'Inactive: 3682640 kB' 'Active(anon): 8337760 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499004 kB' 'Mapped: 209140 kB' 'Shmem: 7842056 kB' 'KReclaimable: 438216 kB' 'Slab: 839152 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400936 kB' 'KernelStack: 12976 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9507136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197308 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.111 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43846488 kB' 'MemAvailable: 47718540 kB' 'Buffers: 6836 kB' 'Cached: 11913496 kB' 'SwapCached: 0 kB' 'Active: 8733924 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338292 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499544 kB' 'Mapped: 209140 kB' 'Shmem: 7842060 kB' 'KReclaimable: 438216 kB' 'Slab: 839144 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400928 kB' 'KernelStack: 12944 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9507152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197292 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 
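The "[[ always [madvise] never != *[never]* ]]" test and the AnonHugePages lookup that produced anon=0 above gate the hugepage accounting on whether transparent hugepages are disabled. A small sketch of that step; the sysfs path is the standard location and is assumed here rather than taken from the script:

#!/usr/bin/env bash
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled outright, so anonymous huge pages could affect the totals.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in this run
fi
echo "anon=$anon"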
06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.112 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 
06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43846240 kB' 'MemAvailable: 47718292 kB' 'Buffers: 6836 kB' 'Cached: 11913496 kB' 'SwapCached: 0 kB' 'Active: 8732744 kB' 'Inactive: 3682640 kB' 'Active(anon): 
8337112 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498328 kB' 'Mapped: 209100 kB' 'Shmem: 7842060 kB' 'KReclaimable: 438216 kB' 'Slab: 839156 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400940 kB' 'KernelStack: 12944 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9507172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.113 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.114 06:00:02 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:09.114 nr_hugepages=1025 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.114 resv_hugepages=0 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.114 surplus_hugepages=0 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.114 anon_hugepages=0 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43845484 kB' 'MemAvailable: 47717536 kB' 'Buffers: 6836 kB' 'Cached: 11913496 kB' 'SwapCached: 0 kB' 'Active: 8733180 kB' 'Inactive: 3682640 kB' 'Active(anon): 8337548 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498764 kB' 'Mapped: 209104 kB' 'Shmem: 7842060 kB' 'KReclaimable: 438216 kB' 'Slab: 839156 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400940 kB' 'KernelStack: 12944 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9507192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.114 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 
06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.115 06:00:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27081956 kB' 'MemUsed: 5747928 kB' 'SwapCached: 0 kB' 'Active: 3188940 kB' 'Inactive: 292152 kB' 'Active(anon): 3027584 kB' 'Inactive(anon): 0 kB' 'Active(file): 161356 kB' 'Inactive(file): 292152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3239848 kB' 'Mapped: 84852 kB' 'AnonPages: 244420 kB' 'Shmem: 2786340 kB' 'KernelStack: 6856 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 165176 kB' 'Slab: 397756 kB' 'SReclaimable: 165176 kB' 'SUnreclaim: 232580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.115 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16763992 kB' 'MemUsed: 10947832 kB' 'SwapCached: 0 kB' 
'Active: 5544124 kB' 'Inactive: 3390488 kB' 'Active(anon): 5309848 kB' 'Inactive(anon): 0 kB' 'Active(file): 234276 kB' 'Inactive(file): 3390488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8680488 kB' 'Mapped: 124252 kB' 'AnonPages: 254260 kB' 'Shmem: 5055724 kB' 'KernelStack: 6120 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273040 kB' 'Slab: 441404 kB' 'SReclaimable: 273040 kB' 'SUnreclaim: 168364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
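The xtrace above and below is setup/common.sh's get_meminfo scan: each per-node meminfo line is split on IFS=': ' into a field name and a value, every field that is not the requested one (HugePages_Surp here) is skipped with continue, and the matching field's value is echoed before returning. The two per-node lookups feed hugepages.sh's check that the odd request of 1025 pages really was split 512/513 across node0 and node1. A minimal sketch of the same scan, assuming a hypothetical helper name rather than the exact setup/common.sh function:

# Sketch of the field scan shown in the trace: look one key up in a (per-node) meminfo file.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # skip every field until the requested one, then print its value
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each line with "Node N "
    return 1
}
# Illustrative use: get_meminfo_sketch HugePages_Surp 1    # prints 0 on this run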
00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:09.116 node0=512 expecting 513 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:09.116 node1=513 expecting 512 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:09.116 00:04:09.116 real 0m1.337s 00:04:09.116 user 0m0.545s 00:04:09.116 sys 0m0.752s 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.116 06:00:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.116 ************************************ 00:04:09.116 END TEST odd_alloc 00:04:09.116 ************************************ 00:04:09.116 06:00:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:09.116 06:00:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:09.116 06:00:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.116 06:00:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.116 06:00:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.116 ************************************ 00:04:09.116 START TEST custom_alloc 00:04:09.116 ************************************ 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@167 -- # local IFS=, 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:09.116 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.117 06:00:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:09.117 
06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.117 06:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.056 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:10.056 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.056 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.056 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.056 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.320 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.320 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.320 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.320 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:10.320 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:10.320 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.320 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.320 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.320 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.320 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.320 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.320 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
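At this point hugepages.sh has composed HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and re-run scripts/setup.sh, so the NVMe and IOAT devices are simply reported as already bound to vfio-pci and the test moves on to verifying that 1536 pages (512 + 1024) were reserved. The underlying mechanism for a per-node request like this is the kernel's per-node hugepage sysfs knob; the snippet below is a sketch of that mechanism with illustrative variable names, not the actual setup.sh internals:

# Sketch: reserve 512 x 2MiB pages on node0 and 1024 on node1 (run as root).
declare -A nodes_hp=([0]=512 [1]=1024)
hp_kb=2048                                   # Hugepagesize reported in /proc/meminfo
for node in "${!nodes_hp[@]}"; do
    echo "${nodes_hp[$node]}" > "/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
done
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo   # expect 1536 pages of 2048 kB afterwards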
00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42790460 kB' 'MemAvailable: 46662512 kB' 'Buffers: 6836 kB' 'Cached: 11913628 kB' 'SwapCached: 0 kB' 'Active: 8735400 kB' 'Inactive: 3682640 kB' 'Active(anon): 8339768 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500940 kB' 'Mapped: 209204 kB' 'Shmem: 7842192 kB' 'KReclaimable: 438216 kB' 'Slab: 839076 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400860 kB' 'KernelStack: 13488 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9509764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197420 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.320 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.320 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
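The verify_nr_hugepages step producing this trace first checked the transparent hugepage mode string ('always [madvise] never', with the active mode bracketed) and, because it is not '[never]', went on to read AnonHugePages from /proc/meminfo, which is the field scan running here. A short sketch of that gate, using direct reads instead of the script's get_meminfo helper:

# Sketch: sample AnonHugePages only when THP is not globally disabled.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)        # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # value in kB
    echo "AnonHugePages: ${anon} kB"
fi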
00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 
06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.321 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42790580 kB' 'MemAvailable: 46662632 kB' 'Buffers: 6836 kB' 'Cached: 11913628 kB' 'SwapCached: 0 kB' 'Active: 8735664 kB' 'Inactive: 3682640 kB' 'Active(anon): 8340032 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501184 kB' 'Mapped: 209204 kB' 'Shmem: 7842192 kB' 'KReclaimable: 438216 kB' 'Slab: 839076 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400860 kB' 'KernelStack: 13488 kB' 'PageTables: 10120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9507416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.322 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 
06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.323 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 
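Each get_meminfo call above prints the full meminfo snapshot before scanning it; in this run the snapshot reports HugePages_Total: 1536, HugePages_Free: 1536, Hugepagesize: 2048 kB and Hugetlb: 3145728 kB, which is internally consistent for a pool of 2 MB pages with no surplus. A quick check with the values copied from this log:

# 1536 hugepages of 2048 kB each should account for the reported Hugetlb size.
total=1536 size_kb=2048
echo $(( total * size_kb ))   # prints 3145728, matching 'Hugetlb: 3145728 kB'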
00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42793140 kB' 'MemAvailable: 46665192 kB' 'Buffers: 6836 kB' 'Cached: 11913628 kB' 'SwapCached: 0 kB' 'Active: 8733832 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338200 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499348 kB' 'Mapped: 209124 kB' 'Shmem: 7842192 kB' 'KReclaimable: 438216 kB' 'Slab: 839072 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400856 kB' 'KernelStack: 13008 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9507436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.324 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 
06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.325 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.590 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:10.591 nr_hugepages=1536 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.591 resv_hugepages=0 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.591 surplus_hugepages=0 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.591 anon_hugepages=0 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42793128 kB' 'MemAvailable: 46665180 kB' 'Buffers: 6836 kB' 'Cached: 11913648 kB' 'SwapCached: 0 kB' 'Active: 8733748 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338116 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499300 kB' 'Mapped: 209116 kB' 'Shmem: 7842212 kB' 'KReclaimable: 438216 kB' 'Slab: 839064 kB' 'SReclaimable: 438216 kB' 'SUnreclaim: 400848 kB' 'KernelStack: 12992 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9507460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.591 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.592 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27061824 kB' 'MemUsed: 5768060 kB' 'SwapCached: 0 kB' 'Active: 3189140 kB' 'Inactive: 292152 kB' 'Active(anon): 3027784 kB' 'Inactive(anon): 0 kB' 'Active(file): 161356 kB' 'Inactive(file): 292152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3240008 kB' 'Mapped: 84864 kB' 'AnonPages: 244512 kB' 'Shmem: 2786500 kB' 'KernelStack: 6840 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 165176 kB' 'Slab: 397660 kB' 
'SReclaimable: 165176 kB' 'SUnreclaim: 232484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.593 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15731556 kB' 'MemUsed: 11980268 kB' 'SwapCached: 0 kB' 'Active: 5543732 kB' 'Inactive: 3390488 kB' 'Active(anon): 5309456 kB' 'Inactive(anon): 0 kB' 'Active(file): 234276 kB' 'Inactive(file): 3390488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8680492 kB' 'Mapped: 124252 kB' 'AnonPages: 253896 kB' 'Shmem: 5055728 kB' 'KernelStack: 6136 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273040 kB' 'Slab: 441404 kB' 'SReclaimable: 273040 kB' 'SUnreclaim: 168364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 
0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.594 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.595 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:10.596 node0=512 expecting 512 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:10.596 node1=1024 expecting 1024 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:10.596 00:04:10.596 real 0m1.339s 00:04:10.596 user 0m0.559s 00:04:10.596 sys 0m0.738s 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.596 06:00:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.596 ************************************ 00:04:10.596 END TEST custom_alloc 00:04:10.596 ************************************ 00:04:10.596 06:00:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:10.596 06:00:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:10.596 06:00:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.596 06:00:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.596 06:00:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.596 ************************************ 00:04:10.596 START TEST no_shrink_alloc 00:04:10.596 ************************************ 00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 
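For readers following the trace: the output above is the tail of the custom_alloc check, where setup/common.sh walks a node's meminfo file with an "IFS=': '; read -r var val _" loop and setup/hugepages.sh then asserts the per-node split that was requested (node0=512, node1=1024 in this run). Below is a minimal standalone sketch of that per-node lookup, assuming a hypothetical helper name get_node_meminfo and hard-coded expectations; it is illustrative only and not the SPDK script itself.

# Sketch of the per-node HugePages lookup the trace performs (assumed helper, not SPDK code)
get_node_meminfo() {
    local key=$1 node=$2 var val
    # node meminfo lines look like "Node 0 HugePages_Total:   512";
    # skip the leading "Node <id>" columns, then match the requested key
    while IFS=': ' read -r _ _ var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

# Compare each node against the split asserted at the end of custom_alloc above
expected=(512 1024)
for node in 0 1; do
    got=$(get_node_meminfo HugePages_Total "$node")
    echo "node${node}=${got} expecting ${expected[$node]}"
done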
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.596 06:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:11.536 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:11.536 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:11.536 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:11.536 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:11.536 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:11.536 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:11.536 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:11.536 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:11.536 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:11.536 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:11.536 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:11.536 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:11.536 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:11.536 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:11.536 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:11.536 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:11.536 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
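For reference, the get_test_nr_hugepages trace above reduces to a simple division: the requested size (in kB) over the hugepage size gives the page count, which is then assigned to the requested node. A minimal sketch of that arithmetic, assuming the 2048 kB hugepage size reported in the meminfo snapshots below; the variable names are illustrative, not the actual setup/hugepages.sh code:

#!/usr/bin/env bash
# Illustrative sketch, not the real setup/hugepages.sh helpers.
hugepagesize_kb=2048                            # 'Hugepagesize: 2048 kB' on this runner
size_kb=2097152                                 # argument seen in 'get_test_nr_hugepages 2097152 0'
node_ids=(0)                                    # only node 0 requested for no_shrink_alloc

nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024 pages

declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages             # matches 'nodes_test[_no_nodes]=1024' in the trace
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"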
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.804 06:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43837548 kB' 'MemAvailable: 47709584 kB' 'Buffers: 6836 kB' 'Cached: 11913752 kB' 'SwapCached: 0 kB' 'Active: 8733860 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338228 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499244 kB' 'Mapped: 209276 kB' 'Shmem: 7842316 kB' 'KReclaimable: 438200 kB' 'Slab: 838912 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400712 kB' 'KernelStack: 12992 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9507888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB'
[... setup/common.sh@31-32: get_meminfo walks every /proc/meminfo field from MemTotal through HardwareCorrupted, hitting 'continue' until AnonHugePages matches ...]
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
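For reference, the get_meminfo calls traced here read /proc/meminfo with IFS=': ' and skip ('continue') every field until the requested key matches, then echo its value. A self-contained sketch of that lookup pattern; it is illustrative, not the exact setup/common.sh function:

#!/usr/bin/env bash
# Illustrative sketch of the lookup pattern, not the exact setup/common.sh get_meminfo.
meminfo_get() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue      # non-matching fields are skipped, as in the trace
        echo "$val"                           # e.g. 0 for AnonHugePages on this runner
        return 0
    done < /proc/meminfo
    echo 0                                    # field absent: report 0 (assumption, not traced)
}

meminfo_get AnonHugePages
meminfo_get HugePages_Surp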
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.805 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43837300 kB' 'MemAvailable: 47709336 kB' 'Buffers: 6836 kB' 'Cached: 11913756 kB' 'SwapCached: 0 kB' 'Active: 8733724 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338092 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499128 kB' 'Mapped: 209200 kB' 'Shmem: 7842320 kB' 'KReclaimable: 438200 kB' 'Slab: 838904 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400704 kB' 'KernelStack: 13040 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9507908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB'
[... setup/common.sh@31-32: get_meminfo walks every /proc/meminfo field from MemTotal through HugePages_Rsvd, hitting 'continue' until HugePages_Surp matches ...]
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
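At this point the trace has collected anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is about to read HugePages_Rsvd into resv; the later pass/fail comparison is not shown in this excerpt. Gathering the same three counters with the illustrative helper from the earlier sketch:

# Reusing the illustrative meminfo_get sketch from the annotation above.
anon=$(meminfo_get AnonHugePages)     # 0 in this run
surp=$(meminfo_get HugePages_Surp)    # 0 in this run
resv=$(meminfo_get HugePages_Rsvd)    # read next in the trace; the snapshots report HugePages_Rsvd: 0
echo "anon=$anon surp=$surp resv=$resv"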
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.807 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43837300 kB' 'MemAvailable: 47709336 kB' 'Buffers: 6836 kB' 'Cached: 11913768 kB' 'SwapCached: 0 kB' 'Active: 8733392 kB' 'Inactive: 3682640 kB' 'Active(anon): 8337760 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498724 kB' 'Mapped: 209124 kB' 'Shmem: 7842332 kB' 'KReclaimable: 438200 kB' 'Slab: 838912 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400712 kB' 'KernelStack: 13040 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9507928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB'
[... setup/common.sh@31-32: get_meminfo walks the /proc/meminfo fields again, hitting 'continue' for every field that is not HugePages_Rsvd; the scan is still in progress (around the Percpu field) at the end of this excerpt ...]
setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.809 nr_hugepages=1024 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.809 resv_hugepages=0 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.809 surplus_hugepages=0 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.809 anon_hugepages=0 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.809 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
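The long run of "IFS=': ' / read -r var val _ / continue" records above is the get_meminfo helper from setup/common.sh walking /proc/meminfo one key at a time until it reaches the requested field (here HugePages_Rsvd, which came back 0); hugepages.sh then echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and immediately re-queries HugePages_Total. A minimal bash sketch of that lookup, reconstructed from the commands visible in the trace rather than copied from the SPDK tree:

  # get_meminfo KEY [NODE] - echo the value of KEY, preferring the node-local
  # meminfo file when a node index is given (as the trace does for node0 later).
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix on sysfs lines
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue  # each non-matching key is one "continue" record in the log
          echo "$val"
          return 0
      done
      return 1
  }

On this host get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Rsvd prints 0, matching the full meminfo dump printed in the next record below.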
00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43837048 kB' 'MemAvailable: 47709084 kB' 'Buffers: 6836 kB' 'Cached: 11913796 kB' 'SwapCached: 0 kB' 'Active: 8733644 kB' 'Inactive: 3682640 kB' 'Active(anon): 8338012 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498972 kB' 'Mapped: 209124 kB' 'Shmem: 7842360 kB' 'KReclaimable: 438200 kB' 'Slab: 838912 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400712 kB' 'KernelStack: 13056 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9507952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.810 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.811 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26007528 kB' 'MemUsed: 6822356 kB' 'SwapCached: 0 kB' 'Active: 3189824 kB' 'Inactive: 292152 kB' 'Active(anon): 3028468 kB' 'Inactive(anon): 0 kB' 'Active(file): 161356 kB' 'Inactive(file): 292152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3240156 kB' 'Mapped: 84872 kB' 'AnonPages: 245044 kB' 'Shmem: 2786648 kB' 'KernelStack: 6872 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 165160 kB' 'Slab: 397504 kB' 'SReclaimable: 165160 kB' 'SUnreclaim: 232344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 
06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.812 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
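At this point hugepages.sh has switched to per-node accounting: get_nodes enumerated /sys/devices/system/node/node0 and node1 (no_nodes=2, nodes_sys[0]=1024, nodes_sys[1]=0), and the records around here are the same get_meminfo scan run against node0's own meminfo (/sys/devices/system/node/node0/meminfo) for HugePages_Surp. A rough sketch of that per-node pass, reusing the get_meminfo sketch above; filling nodes_sys from per-node HugePages_Total matches the logged numbers but is an assumption about how the real script obtains them:

  shopt -s extglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      n=${node##*node}                           # "node0" -> "0"
      nodes_sys[$n]=$(get_meminfo HugePages_Total "$n")
  done
  no_nodes=${#nodes_sys[@]}                      # 2 on this machine
  for n in "${!nodes_sys[@]}"; do
      surp=$(get_meminfo HugePages_Surp "$n")    # 0 for node0 in the trace below
      echo "node$n=${nodes_sys[$n]} surplus=${surp:-0}"
  done

The node0 line this prints (node0=1024 surplus=0) lines up with the "node0=1024 expecting 1024" check that follows, after which the test reruns scripts/setup.sh with NRHUGE=512 and the script reports "INFO: Requested 512 hugepages but 1024 already allocated on node0".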
00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.813 node0=1024 expecting 1024 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.813 06:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.196 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:13.196 0000:88:00.0 (8086 0a54): Already using the 
vfio-pci driver 00:04:13.196 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:13.196 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:13.196 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:13.196 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:13.196 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:13.196 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:13.196 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:13.196 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:13.196 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:13.196 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:13.196 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:13.196 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:13.196 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:13.196 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:13.196 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:13.196 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43824872 kB' 'MemAvailable: 47696908 kB' 'Buffers: 6836 kB' 'Cached: 11913860 kB' 'SwapCached: 0 kB' 'Active: 8734568 kB' 'Inactive: 
3682640 kB' 'Active(anon): 8338936 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499716 kB' 'Mapped: 209216 kB' 'Shmem: 7842424 kB' 'KReclaimable: 438200 kB' 'Slab: 838856 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400656 kB' 'KernelStack: 13040 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9509748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197388 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.196 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
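The repeated per-field comparisons above and below (each meminfo key tested against AnonHugePages, then 'continue') are the xtrace of setup/common.sh's get_meminfo helper walking /proc/meminfo one line at a time. A minimal sketch of that lookup pattern, reduced from what the trace shows and not the literal upstream helper (node selection and option handling are simplified here):

    shopt -s extglob
    get_meminfo() {
      local get=$1 node=${2:-} mem var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      echo 0
    }

Called as get_meminfo HugePages_Total it would print 1024 for the snapshot captured in this run; every key that does not match the requested field is what produces the long run of 'continue' entries in the trace.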
00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.197 06:00:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43829348 kB' 'MemAvailable: 47701384 kB' 'Buffers: 6836 kB' 'Cached: 11913864 kB' 'SwapCached: 0 kB' 'Active: 8738100 kB' 'Inactive: 3682640 kB' 'Active(anon): 8342468 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503324 kB' 'Mapped: 209132 kB' 'Shmem: 7842428 kB' 'KReclaimable: 438200 kB' 'Slab: 838856 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400656 kB' 'KernelStack: 13104 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9512668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197324 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 
06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 
06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.198 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.199 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.200 06:00:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43827456 kB' 'MemAvailable: 47699492 kB' 'Buffers: 6836 kB' 'Cached: 11913884 kB' 'SwapCached: 0 kB' 'Active: 8739488 kB' 'Inactive: 3682640 kB' 'Active(anon): 8343856 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504628 kB' 'Mapped: 209140 kB' 'Shmem: 7842448 kB' 'KReclaimable: 438200 kB' 'Slab: 838880 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400680 kB' 'KernelStack: 13040 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9514288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197312 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 
06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.200 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
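The same field-by-field pass repeats here for HugePages_Rsvd, after AnonHugePages and HugePages_Surp already came back 0 above; the Rsvd pass ends the same way just below ('echo 0', resv=0). Outside the harness the hugepage counters can be pulled out of /proc/meminfo directly, for example with an illustrative one-liner that is not part of setup/common.sh:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
    # HugePages_Total: 1024
    # HugePages_Free:  1024
    # HugePages_Rsvd:  0
    # HugePages_Surp:  0    (the values reported in this run's snapshots)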
00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.201 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.202 nr_hugepages=1024 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.202 resv_hugepages=0 00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.202 surplus_hugepages=0 00:04:13.202 06:00:06 
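The block above is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key until it reaches the requested field (HugePages_Rsvd here, which resolves to 0). As an illustration of the technique (a minimal re-implementation, not the SPDK helper itself, and limited to the system-wide /proc/meminfo case shown in this part of the trace), the scan amounts to:

    # Return the value of one /proc/meminfo field, e.g. get_meminfo HugePages_Rsvd.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"      # kB for sizes, a bare page count for HugePages_* fields
                return 0
            fi
        done < /proc/meminfo
        return 1                 # field not present on this kernel
    }

    resv=$(get_meminfo HugePages_Rsvd)   # this run resolved resv to 0

The real helper also accepts a node number and then reads /sys/devices/system/node/node<N>/meminfo instead, which is what produces the per-node lookups later in this test.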
00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.202 anon_hugepages=0
00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [get_meminfo set-up: get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43827456 kB' 'MemAvailable: 47699492 kB' 'Buffers: 6836 kB' 'Cached: 11913904 kB' 'SwapCached: 0 kB' 'Active: 8739672 kB' 'Inactive: 3682640 kB' 'Active(anon): 8344040 kB' 'Inactive(anon): 0 kB' 'Active(file): 395632 kB' 'Inactive(file): 3682640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504780 kB' 'Mapped: 209140 kB' 'Shmem: 7842468 kB' 'KReclaimable: 438200 kB' 'Slab: 838880 kB' 'SReclaimable: 438200 kB' 'SUnreclaim: 400680 kB' 'KernelStack: 13072 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9514308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197312 kB' 'VmallocChunk: 0 kB' 'Percpu: 41472 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2420316 kB' 'DirectMap2M: 20568064 kB' 'DirectMap1G: 46137344 kB'
00:04:13.202 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan continues: reads each field from MemTotal through Unaccepted and continues past every one that is not HugePages_Total]
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
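At this point the test has read back HugePages_Total: 1024 against a Hugepagesize of 2048 kB (1024 x 2048 kB = 2097152 kB, i.e. the 2 GiB shown on the Hugetlb line of the dump above) and verified 1024 == nr_hugepages + surplus + reserved (1024 + 0 + 0). The get_nodes call then records how those pages are spread across the two NUMA nodes. A compact sketch of that bookkeeping, reusing the get_meminfo sketch above; the sysfs paths are the standard kernel layout, and the counts in the comments come from this run rather than from the script itself:

    declare -A nodes_sys
    nr_hugepages=1024 surp=0 resv=0        # values echoed earlier in this test
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}                   # node0 -> 0, node1 -> 1
        nodes_sys[$n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done                                   # this run: nodes_sys[0]=1024, nodes_sys[1]=0
    no_nodes=${#nodes_sys[@]}              # 2 on this machine
    (( no_nodes > 0 )) || exit 1
    total=$(get_meminfo HugePages_Total)   # 1024, from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool changed unexpectedly'

The per-node loop that follows in the trace then asks each node for HugePages_Surp through the node-local meminfo file, which is why the next get_meminfo call reads /sys/devices/system/node/node0/meminfo.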
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [get_meminfo set-up: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, IFS=': ']
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26005504 kB' 'MemUsed: 6824380 kB' 'SwapCached: 0 kB' 'Active: 3191996 kB' 'Inactive: 292152 kB' 'Active(anon): 3030640 kB' 'Inactive(anon): 0 kB' 'Active(file): 161356 kB' 'Inactive(file): 292152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3240200 kB' 'Mapped: 84876 kB' 'AnonPages: 247036 kB' 'Shmem: 2786692 kB' 'KernelStack: 6840 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 165160 kB' 'Slab: 397488 kB' 'SReclaimable: 165160 kB' 'SUnreclaim: 232328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:13.204 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan continues: reads each node0 field from MemTotal through HugePages_Free and continues past every one that is not HugePages_Surp]
00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.205 node0=1024 expecting 1024 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.205 00:04:13.205 real 0m2.721s 00:04:13.205 user 0m1.127s 00:04:13.205 sys 0m1.489s 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.205 06:00:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.205 ************************************ 00:04:13.205 END TEST no_shrink_alloc 00:04:13.205 ************************************ 00:04:13.205 06:00:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.205 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:13.205 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:13.205 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.205 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.205 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.205 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.205 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.463 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.464 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.464 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.464 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.464 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.464 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:13.464 06:00:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:13.464 00:04:13.464 real 0m11.071s 00:04:13.464 user 0m4.213s 00:04:13.464 sys 0m5.709s 00:04:13.464 06:00:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.464 06:00:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.464 ************************************ 00:04:13.464 END TEST hugepages 00:04:13.464 ************************************ 00:04:13.464 06:00:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:13.464 06:00:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:13.464 06:00:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.464 06:00:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.464 06:00:06 
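The clear_hp calls traced here put the hugepage pools back to zero before the driver tests start: for every NUMA node they write 0 into each hugepages-<size>/nr_hugepages file and then export CLEAR_HUGE=yes for the later scripts/setup.sh invocations. A minimal sketch of that cleanup under the standard sysfs layout (an illustration, not the hugepages.sh source; writing these files needs root):

    # Release every hugepage pool (2 MiB and 1 GiB sizes) on every node.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # give the pages back to the kernel
            done
        done
        export CLEAR_HUGE=yes                 # later setup.sh runs start from a clean pool
    }

The surrounding run_test wrapper is what prints the START/END banners and the real/user/sys timings seen above (real 0m11.071s for the whole hugepages group).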
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.464 ************************************ 00:04:13.464 START TEST driver 00:04:13.464 ************************************ 00:04:13.464 06:00:06 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:13.464 * Looking for test storage... 00:04:13.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:13.464 06:00:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:13.464 06:00:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.464 06:00:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.005 06:00:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:16.005 06:00:09 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.005 06:00:09 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.005 06:00:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.005 ************************************ 00:04:16.005 START TEST guess_driver 00:04:16.005 ************************************ 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:16.005 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.005 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.005 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.005 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.005 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:16.005 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:16.005 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:16.005 Looking for driver=vfio-pci 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.005 06:00:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.382 06:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.323 06:00:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.323 06:00:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.323 06:00:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.323 06:00:11 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:18.323 06:00:11 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:18.323 06:00:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.323 06:00:11 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.857 00:04:20.857 real 0m4.851s 00:04:20.857 user 0m1.131s 00:04:20.857 sys 0m1.848s 00:04:20.857 06:00:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:04:20.857 06:00:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.857 ************************************ 00:04:20.857 END TEST guess_driver 00:04:20.857 ************************************ 00:04:20.857 06:00:13 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:20.857 00:04:20.857 real 0m7.356s 00:04:20.857 user 0m1.672s 00:04:20.857 sys 0m2.864s 00:04:20.857 06:00:13 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.857 06:00:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.857 ************************************ 00:04:20.857 END TEST driver 00:04:20.857 ************************************ 00:04:20.857 06:00:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:20.857 06:00:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:20.857 06:00:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.858 06:00:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.858 06:00:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.858 ************************************ 00:04:20.858 START TEST devices 00:04:20.858 ************************************ 00:04:20.858 06:00:13 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:20.858 * Looking for test storage... 00:04:20.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:20.858 06:00:14 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:20.858 06:00:14 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:20.858 06:00:14 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.858 06:00:14 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:22.237 06:00:15 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:22.237 No valid GPT data, bailing 00:04:22.237 06:00:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:22.237 06:00:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:22.237 06:00:15 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:22.237 06:00:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.237 06:00:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:22.237 ************************************ 00:04:22.237 START TEST nvme_mount 00:04:22.237 ************************************ 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:22.237 06:00:15 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:22.237 06:00:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:23.620 Creating new GPT entries in memory. 00:04:23.620 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:23.620 other utilities. 00:04:23.620 06:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:23.620 06:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.620 06:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:23.620 06:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.620 06:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:24.559 Creating new GPT entries in memory. 00:04:24.559 The operation has completed successfully. 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1592063 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.559 06:00:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.559 06:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.493 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.751 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.751 06:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.009 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:26.009 /dev/nvme0n1: 8 bytes were 
erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:26.009 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.009 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.009 06:00:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.383 06:00:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.321 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.322 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.582 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.582 00:04:28.582 real 0m6.249s 00:04:28.582 user 0m1.477s 00:04:28.582 sys 0m2.354s 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.582 06:00:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:28.582 ************************************ 00:04:28.582 END TEST nvme_mount 00:04:28.582 ************************************ 00:04:28.582 06:00:21 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:28.582 06:00:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:28.582 06:00:21 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.582 06:00:21 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.582 06:00:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:28.582 ************************************ 00:04:28.582 START TEST dm_mount 00:04:28.582 ************************************ 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:28.582 06:00:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:29.522 Creating new GPT entries in memory. 00:04:29.522 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.522 other utilities. 
00:04:29.522 06:00:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.522 06:00:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.522 06:00:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.522 06:00:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.522 06:00:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:30.918 Creating new GPT entries in memory. 00:04:30.918 The operation has completed successfully. 00:04:30.918 06:00:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:30.918 06:00:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.918 06:00:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.918 06:00:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.918 06:00:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:31.858 The operation has completed successfully. 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1594454 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.858 06:00:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.858 06:00:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:32.831 06:00:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.831 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.831 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:32.831 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.831 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.831 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.831 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.832 06:00:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 
06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.225 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:34.226 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:34.226 00:04:34.226 real 0m5.523s 00:04:34.226 user 0m0.911s 00:04:34.226 sys 0m1.471s 00:04:34.226 06:00:27 
setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.226 06:00:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 ************************************ 00:04:34.226 END TEST dm_mount 00:04:34.226 ************************************ 00:04:34.226 06:00:27 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:34.226 06:00:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:34.226 06:00:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:34.226 06:00:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.226 06:00:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.226 06:00:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:34.226 06:00:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.226 06:00:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.484 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:34.484 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:34.484 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.484 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:34.484 06:00:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:34.484 06:00:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.484 06:00:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.484 06:00:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.484 06:00:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.484 06:00:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.484 06:00:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:34.484 00:04:34.484 real 0m13.674s 00:04:34.484 user 0m3.023s 00:04:34.484 sys 0m4.856s 00:04:34.484 06:00:27 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.484 06:00:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:34.484 ************************************ 00:04:34.484 END TEST devices 00:04:34.484 ************************************ 00:04:34.484 06:00:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:34.484 00:04:34.484 real 0m42.660s 00:04:34.484 user 0m12.127s 00:04:34.484 sys 0m18.719s 00:04:34.484 06:00:27 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.484 06:00:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.484 ************************************ 00:04:34.484 END TEST setup.sh 00:04:34.484 ************************************ 00:04:34.485 06:00:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.485 06:00:27 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:35.859 Hugepages 00:04:35.859 node hugesize free / total 00:04:35.859 node0 1048576kB 0 / 0 00:04:35.859 node0 2048kB 2048 / 2048 00:04:35.859 node1 1048576kB 0 / 0 00:04:35.859 node1 2048kB 0 / 0 00:04:35.859 00:04:35.859 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.859 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:35.859 I/OAT 0000:00:04.1 
8086 0e21 0 ioatdma - - 00:04:35.859 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:35.859 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:35.859 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:35.859 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:35.859 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:35.859 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:35.859 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:35.859 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:35.859 06:00:28 -- spdk/autotest.sh@130 -- # uname -s 00:04:35.859 06:00:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:35.859 06:00:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:35.859 06:00:28 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.796 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:36.796 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:36.796 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:36.796 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:36.796 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:36.796 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:36.796 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:36.796 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:36.796 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:36.796 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:36.796 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:36.796 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:36.796 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.055 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.055 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.055 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:37.991 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.991 06:00:31 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:38.929 06:00:32 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:38.929 06:00:32 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:38.929 06:00:32 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:38.929 06:00:32 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:38.929 06:00:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:38.929 06:00:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:38.930 06:00:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.930 06:00:32 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:38.930 06:00:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:38.930 06:00:32 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:38.930 06:00:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:38.930 06:00:32 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.319 Waiting for block devices as requested 00:04:40.319 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:40.319 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:40.319 
0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:40.577 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:40.577 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:40.577 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:40.577 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:40.836 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:40.836 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:40.836 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:40.836 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:41.096 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:41.096 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:41.096 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:41.353 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:41.353 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:41.353 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:41.611 06:00:34 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:41.611 06:00:34 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:41.611 06:00:34 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:41.611 06:00:34 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:41.611 06:00:34 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:41.611 06:00:34 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:41.611 06:00:34 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:41.611 06:00:34 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:41.611 06:00:34 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:41.611 06:00:34 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:41.611 06:00:34 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:41.611 06:00:34 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:41.611 06:00:34 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:41.611 06:00:34 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:41.611 06:00:34 -- common/autotest_common.sh@1557 -- # continue 00:04:41.611 06:00:34 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:41.611 06:00:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.611 06:00:34 -- common/autotest_common.sh@10 -- # set +x 00:04:41.611 06:00:34 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:41.611 06:00:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.611 06:00:34 -- common/autotest_common.sh@10 -- # set +x 00:04:41.611 06:00:34 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.548 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.548 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.548 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.807 0000:00:04.4 (8086 0e24): 
ioatdma -> vfio-pci 00:04:42.807 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.807 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.807 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.807 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.807 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:43.742 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.742 06:00:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:43.742 06:00:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.742 06:00:37 -- common/autotest_common.sh@10 -- # set +x 00:04:43.742 06:00:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:43.742 06:00:37 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:43.742 06:00:37 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.742 06:00:37 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:43.742 06:00:37 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:43.742 06:00:37 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:43.742 06:00:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:43.742 06:00:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:43.742 06:00:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.742 06:00:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.742 06:00:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:44.007 06:00:37 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:44.007 06:00:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:44.007 06:00:37 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:44.007 06:00:37 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:44.007 06:00:37 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:44.007 06:00:37 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:44.008 06:00:37 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:44.008 06:00:37 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:44.008 06:00:37 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:44.008 06:00:37 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1599634 00:04:44.008 06:00:37 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.008 06:00:37 -- common/autotest_common.sh@1598 -- # waitforlisten 1599634 00:04:44.008 06:00:37 -- common/autotest_common.sh@829 -- # '[' -z 1599634 ']' 00:04:44.008 06:00:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.008 06:00:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.008 06:00:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
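The opal_revert_cleanup step above selects controllers by reading each PCI function's device ID out of sysfs and comparing it with 0x0a54, the ID reported for 0000:88:00.0. A minimal sketch of that selection logic, assuming a standard Linux sysfs layout; the loop and variable names are illustrative and not taken from the test scripts:

#!/usr/bin/env bash
# Sketch: list PCI functions whose device ID matches a target value
# (0x0a54 is the ID seen for 0000:88:00.0 in this run).
target_id=0x0a54
for dev in /sys/bus/pci/devices/*; do
    id=$(cat "$dev/device")              # 16-bit PCI device ID, e.g. 0x0a54
    if [[ "$id" == "$target_id" ]]; then
        echo "match: $(basename "$dev") (device $id)"
    fi
done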
00:04:44.008 06:00:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.008 06:00:37 -- common/autotest_common.sh@10 -- # set +x 00:04:44.008 [2024-07-23 06:00:37.189073] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:04:44.008 [2024-07-23 06:00:37.189169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599634 ] 00:04:44.008 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.008 [2024-07-23 06:00:37.221002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:44.008 [2024-07-23 06:00:37.252433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.008 [2024-07-23 06:00:37.341699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.271 06:00:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.271 06:00:37 -- common/autotest_common.sh@862 -- # return 0 00:04:44.271 06:00:37 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:44.271 06:00:37 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:44.271 06:00:37 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:47.580 nvme0n1 00:04:47.580 06:00:40 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:47.580 [2024-07-23 06:00:40.892343] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:47.580 [2024-07-23 06:00:40.892390] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:47.580 request: 00:04:47.580 { 00:04:47.580 "nvme_ctrlr_name": "nvme0", 00:04:47.580 "password": "test", 00:04:47.580 "method": "bdev_nvme_opal_revert", 00:04:47.580 "req_id": 1 00:04:47.580 } 00:04:47.580 Got JSON-RPC error response 00:04:47.580 response: 00:04:47.580 { 00:04:47.580 "code": -32603, 00:04:47.580 "message": "Internal error" 00:04:47.580 } 00:04:47.580 06:00:40 -- common/autotest_common.sh@1604 -- # true 00:04:47.580 06:00:40 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:47.580 06:00:40 -- common/autotest_common.sh@1608 -- # killprocess 1599634 00:04:47.580 06:00:40 -- common/autotest_common.sh@948 -- # '[' -z 1599634 ']' 00:04:47.580 06:00:40 -- common/autotest_common.sh@952 -- # kill -0 1599634 00:04:47.580 06:00:40 -- common/autotest_common.sh@953 -- # uname 00:04:47.580 06:00:40 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.580 06:00:40 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1599634 00:04:47.839 06:00:40 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.839 06:00:40 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.839 06:00:40 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1599634' 00:04:47.839 killing process with pid 1599634 00:04:47.839 06:00:40 -- common/autotest_common.sh@967 -- # kill 1599634 00:04:47.839 06:00:40 -- common/autotest_common.sh@972 -- # wait 1599634 00:04:49.743 06:00:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:49.743 06:00:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:49.743 
06:00:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.743 06:00:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.743 06:00:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:49.743 06:00:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.743 06:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:49.743 06:00:42 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:49.743 06:00:42 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.743 06:00:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.743 06:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.743 06:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:49.743 ************************************ 00:04:49.743 START TEST env 00:04:49.743 ************************************ 00:04:49.743 06:00:42 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.743 * Looking for test storage... 00:04:49.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:49.743 06:00:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.743 06:00:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.743 06:00:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.743 06:00:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.743 ************************************ 00:04:49.743 START TEST env_memory 00:04:49.743 ************************************ 00:04:49.743 06:00:42 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.743 00:04:49.743 00:04:49.743 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.743 http://cunit.sourceforge.net/ 00:04:49.743 00:04:49.743 00:04:49.743 Suite: memory 00:04:49.743 Test: alloc and free memory map ...[2024-07-23 06:00:42.830442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.743 passed 00:04:49.743 Test: mem map translation ...[2024-07-23 06:00:42.850749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.744 [2024-07-23 06:00:42.850770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.744 [2024-07-23 06:00:42.850831] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.744 [2024-07-23 06:00:42.850844] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.744 passed 00:04:49.744 Test: mem map registration ...[2024-07-23 06:00:42.894952] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:49.744 [2024-07-23 06:00:42.894971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, 
vaddr=0x4d2 len=2097152 00:04:49.744 passed 00:04:49.744 Test: mem map adjacent registrations ...passed 00:04:49.744 00:04:49.744 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.744 suites 1 1 n/a 0 0 00:04:49.744 tests 4 4 4 0 0 00:04:49.744 asserts 152 152 152 0 n/a 00:04:49.744 00:04:49.744 Elapsed time = 0.148 seconds 00:04:49.744 00:04:49.744 real 0m0.155s 00:04:49.744 user 0m0.146s 00:04:49.744 sys 0m0.008s 00:04:49.744 06:00:42 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.744 06:00:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.744 ************************************ 00:04:49.744 END TEST env_memory 00:04:49.744 ************************************ 00:04:49.744 06:00:42 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.744 06:00:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.744 06:00:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.744 06:00:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.744 06:00:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.744 ************************************ 00:04:49.744 START TEST env_vtophys 00:04:49.744 ************************************ 00:04:49.744 06:00:42 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.744 EAL: lib.eal log level changed from notice to debug 00:04:49.744 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.744 EAL: Detected lcore 1 as core 1 on socket 0 00:04:49.744 EAL: Detected lcore 2 as core 2 on socket 0 00:04:49.744 EAL: Detected lcore 3 as core 3 on socket 0 00:04:49.744 EAL: Detected lcore 4 as core 4 on socket 0 00:04:49.744 EAL: Detected lcore 5 as core 5 on socket 0 00:04:49.744 EAL: Detected lcore 6 as core 8 on socket 0 00:04:49.744 EAL: Detected lcore 7 as core 9 on socket 0 00:04:49.744 EAL: Detected lcore 8 as core 10 on socket 0 00:04:49.744 EAL: Detected lcore 9 as core 11 on socket 0 00:04:49.744 EAL: Detected lcore 10 as core 12 on socket 0 00:04:49.744 EAL: Detected lcore 11 as core 13 on socket 0 00:04:49.744 EAL: Detected lcore 12 as core 0 on socket 1 00:04:49.744 EAL: Detected lcore 13 as core 1 on socket 1 00:04:49.744 EAL: Detected lcore 14 as core 2 on socket 1 00:04:49.744 EAL: Detected lcore 15 as core 3 on socket 1 00:04:49.744 EAL: Detected lcore 16 as core 4 on socket 1 00:04:49.744 EAL: Detected lcore 17 as core 5 on socket 1 00:04:49.744 EAL: Detected lcore 18 as core 8 on socket 1 00:04:49.744 EAL: Detected lcore 19 as core 9 on socket 1 00:04:49.744 EAL: Detected lcore 20 as core 10 on socket 1 00:04:49.744 EAL: Detected lcore 21 as core 11 on socket 1 00:04:49.744 EAL: Detected lcore 22 as core 12 on socket 1 00:04:49.744 EAL: Detected lcore 23 as core 13 on socket 1 00:04:49.744 EAL: Detected lcore 24 as core 0 on socket 0 00:04:49.744 EAL: Detected lcore 25 as core 1 on socket 0 00:04:49.744 EAL: Detected lcore 26 as core 2 on socket 0 00:04:49.744 EAL: Detected lcore 27 as core 3 on socket 0 00:04:49.744 EAL: Detected lcore 28 as core 4 on socket 0 00:04:49.744 EAL: Detected lcore 29 as core 5 on socket 0 00:04:49.744 EAL: Detected lcore 30 as core 8 on socket 0 00:04:49.744 EAL: Detected lcore 31 as core 9 on socket 0 00:04:49.744 EAL: Detected lcore 32 as core 10 on socket 0 00:04:49.744 EAL: Detected lcore 33 as core 11 on socket 0 00:04:49.744 EAL: Detected lcore 34 as core 12 
on socket 0 00:04:49.744 EAL: Detected lcore 35 as core 13 on socket 0 00:04:49.744 EAL: Detected lcore 36 as core 0 on socket 1 00:04:49.744 EAL: Detected lcore 37 as core 1 on socket 1 00:04:49.744 EAL: Detected lcore 38 as core 2 on socket 1 00:04:49.744 EAL: Detected lcore 39 as core 3 on socket 1 00:04:49.744 EAL: Detected lcore 40 as core 4 on socket 1 00:04:49.744 EAL: Detected lcore 41 as core 5 on socket 1 00:04:49.744 EAL: Detected lcore 42 as core 8 on socket 1 00:04:49.744 EAL: Detected lcore 43 as core 9 on socket 1 00:04:49.744 EAL: Detected lcore 44 as core 10 on socket 1 00:04:49.744 EAL: Detected lcore 45 as core 11 on socket 1 00:04:49.744 EAL: Detected lcore 46 as core 12 on socket 1 00:04:49.744 EAL: Detected lcore 47 as core 13 on socket 1 00:04:49.744 EAL: Maximum logical cores by configuration: 128 00:04:49.744 EAL: Detected CPU lcores: 48 00:04:49.744 EAL: Detected NUMA nodes: 2 00:04:49.744 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:49.744 EAL: Detected shared linkage of DPDK 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:49.744 EAL: Registered [vdev] bus. 00:04:49.744 EAL: bus.vdev log level changed from disabled to notice 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:49.744 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:49.744 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:49.744 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:49.744 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.744 EAL: No shared files mode enabled, IPC is disabled 00:04:49.744 EAL: Bus pci wants IOVA as 'DC' 00:04:49.744 EAL: Bus vdev wants IOVA as 'DC' 00:04:49.744 EAL: Buses did not request a specific IOVA mode. 00:04:49.744 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:49.744 EAL: Selected IOVA mode 'VA' 00:04:49.744 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.744 EAL: Probing VFIO support... 00:04:49.744 EAL: IOMMU type 1 (Type 1) is supported 00:04:49.744 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:49.744 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:49.744 EAL: VFIO support initialized 00:04:49.744 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.744 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.744 EAL: Setting up physically contiguous memory... 
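The IOVA-as-VA selection and "VFIO support initialized" messages above depend on the host exposing an IOMMU and the vfio-pci driver, and the "No free 2048 kB hugepages reported on node 1" notice matches the per-node pools listed earlier. A quick host-side check of those preconditions, using plain sysfs reads rather than anything from the test scripts:

# Count IOMMU groups; zero usually means no usable IOMMU is exposed.
ls /sys/kernel/iommu_groups | wc -l

# Confirm the vfio-pci driver is registered.
test -d /sys/bus/pci/drivers/vfio-pci && echo "vfio-pci present"

# Per-node 2048 kB hugepage pools, matching the node0/node1 figures shown earlier.
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages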
00:04:49.744 EAL: Setting maximum number of open files to 524288 00:04:49.744 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.744 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:49.744 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.744 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.744 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.744 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.744 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.744 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.744 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.744 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.744 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.744 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.744 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.744 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.744 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.744 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.744 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.744 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.744 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.744 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.744 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.744 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.744 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.744 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.744 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:49.744 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.744 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.744 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:49.744 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.744 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:49.744 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.744 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.744 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:49.744 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.744 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:49.744 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.744 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.744 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:49.744 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:49.744 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.745 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:49.745 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.745 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.745 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:49.745 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:49.745 EAL: Hugepages will be freed exactly as allocated. 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: TSC frequency is ~2700000 KHz 00:04:49.745 EAL: Main lcore 0 is ready (tid=7efd8f04ba00;cpuset=[0]) 00:04:49.745 EAL: Trying to obtain current memory policy. 00:04:49.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.745 EAL: Restoring previous memory policy: 0 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.745 00:04:49.745 00:04:49.745 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.745 http://cunit.sourceforge.net/ 00:04:49.745 00:04:49.745 00:04:49.745 Suite: components_suite 00:04:49.745 Test: vtophys_malloc_test ...passed 00:04:49.745 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:49.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.745 EAL: Restoring previous memory policy: 4 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.745 EAL: Trying to obtain current memory policy. 00:04:49.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.745 EAL: Restoring previous memory policy: 4 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.745 EAL: Trying to obtain current memory policy. 00:04:49.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.745 EAL: Restoring previous memory policy: 4 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.745 EAL: Trying to obtain current memory policy. 
00:04:49.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.745 EAL: Restoring previous memory policy: 4 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was expanded by 18MB 00:04:49.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.745 EAL: request: mp_malloc_sync 00:04:49.745 EAL: No shared files mode enabled, IPC is disabled 00:04:49.745 EAL: Heap on socket 0 was shrunk by 18MB 00:04:49.745 EAL: Trying to obtain current memory policy. 00:04:49.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.005 EAL: Restoring previous memory policy: 4 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.005 EAL: request: mp_malloc_sync 00:04:50.005 EAL: No shared files mode enabled, IPC is disabled 00:04:50.005 EAL: Heap on socket 0 was expanded by 34MB 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.005 EAL: request: mp_malloc_sync 00:04:50.005 EAL: No shared files mode enabled, IPC is disabled 00:04:50.005 EAL: Heap on socket 0 was shrunk by 34MB 00:04:50.005 EAL: Trying to obtain current memory policy. 00:04:50.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.005 EAL: Restoring previous memory policy: 4 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.005 EAL: request: mp_malloc_sync 00:04:50.005 EAL: No shared files mode enabled, IPC is disabled 00:04:50.005 EAL: Heap on socket 0 was expanded by 66MB 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.005 EAL: request: mp_malloc_sync 00:04:50.005 EAL: No shared files mode enabled, IPC is disabled 00:04:50.005 EAL: Heap on socket 0 was shrunk by 66MB 00:04:50.005 EAL: Trying to obtain current memory policy. 00:04:50.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.005 EAL: Restoring previous memory policy: 4 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.005 EAL: request: mp_malloc_sync 00:04:50.005 EAL: No shared files mode enabled, IPC is disabled 00:04:50.005 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.005 EAL: request: mp_malloc_sync 00:04:50.005 EAL: No shared files mode enabled, IPC is disabled 00:04:50.005 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.005 EAL: Trying to obtain current memory policy. 00:04:50.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.005 EAL: Restoring previous memory policy: 4 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.005 EAL: request: mp_malloc_sync 00:04:50.005 EAL: No shared files mode enabled, IPC is disabled 00:04:50.005 EAL: Heap on socket 0 was expanded by 258MB 00:04:50.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.264 EAL: request: mp_malloc_sync 00:04:50.264 EAL: No shared files mode enabled, IPC is disabled 00:04:50.264 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.264 EAL: Trying to obtain current memory policy. 
00:04:50.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.264 EAL: Restoring previous memory policy: 4 00:04:50.264 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.264 EAL: request: mp_malloc_sync 00:04:50.264 EAL: No shared files mode enabled, IPC is disabled 00:04:50.264 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.525 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.525 EAL: request: mp_malloc_sync 00:04:50.525 EAL: No shared files mode enabled, IPC is disabled 00:04:50.525 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.525 EAL: Trying to obtain current memory policy. 00:04:50.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.784 EAL: Restoring previous memory policy: 4 00:04:50.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.784 EAL: request: mp_malloc_sync 00:04:50.784 EAL: No shared files mode enabled, IPC is disabled 00:04:50.784 EAL: Heap on socket 0 was expanded by 1026MB 00:04:51.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.302 EAL: request: mp_malloc_sync 00:04:51.302 EAL: No shared files mode enabled, IPC is disabled 00:04:51.302 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:51.302 passed 00:04:51.302 00:04:51.302 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.302 suites 1 1 n/a 0 0 00:04:51.302 tests 2 2 2 0 0 00:04:51.302 asserts 497 497 497 0 n/a 00:04:51.302 00:04:51.302 Elapsed time = 1.352 seconds 00:04:51.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.302 EAL: request: mp_malloc_sync 00:04:51.302 EAL: No shared files mode enabled, IPC is disabled 00:04:51.302 EAL: Heap on socket 0 was shrunk by 2MB 00:04:51.302 EAL: No shared files mode enabled, IPC is disabled 00:04:51.302 EAL: No shared files mode enabled, IPC is disabled 00:04:51.302 EAL: No shared files mode enabled, IPC is disabled 00:04:51.302 00:04:51.302 real 0m1.469s 00:04:51.302 user 0m0.833s 00:04:51.302 sys 0m0.601s 00:04:51.302 06:00:44 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.302 06:00:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:51.302 ************************************ 00:04:51.302 END TEST env_vtophys 00:04:51.302 ************************************ 00:04:51.302 06:00:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:51.302 06:00:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:51.302 06:00:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.302 06:00:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.302 06:00:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.302 ************************************ 00:04:51.302 START TEST env_pci 00:04:51.302 ************************************ 00:04:51.302 06:00:44 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:51.302 00:04:51.302 00:04:51.302 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.302 http://cunit.sourceforge.net/ 00:04:51.302 00:04:51.302 00:04:51.302 Suite: pci 00:04:51.302 Test: pci_hook ...[2024-07-23 06:00:44.518050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1600523 has claimed it 00:04:51.302 EAL: Cannot find device (10000:00:01.0) 00:04:51.302 EAL: Failed to attach device on primary process 00:04:51.302 passed 00:04:51.302 
00:04:51.302 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.302 suites 1 1 n/a 0 0 00:04:51.302 tests 1 1 1 0 0 00:04:51.302 asserts 25 25 25 0 n/a 00:04:51.302 00:04:51.302 Elapsed time = 0.021 seconds 00:04:51.302 00:04:51.302 real 0m0.033s 00:04:51.302 user 0m0.009s 00:04:51.302 sys 0m0.024s 00:04:51.302 06:00:44 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.302 06:00:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:51.302 ************************************ 00:04:51.302 END TEST env_pci 00:04:51.302 ************************************ 00:04:51.302 06:00:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:51.302 06:00:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:51.302 06:00:44 env -- env/env.sh@15 -- # uname 00:04:51.302 06:00:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:51.302 06:00:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:51.302 06:00:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.302 06:00:44 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:51.302 06:00:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.302 06:00:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.302 ************************************ 00:04:51.302 START TEST env_dpdk_post_init 00:04:51.302 ************************************ 00:04:51.302 06:00:44 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.302 EAL: Detected CPU lcores: 48 00:04:51.302 EAL: Detected NUMA nodes: 2 00:04:51.302 EAL: Detected shared linkage of DPDK 00:04:51.302 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.302 EAL: Selected IOVA mode 'VA' 00:04:51.302 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.302 EAL: VFIO support initialized 00:04:51.561 EAL: Using IOMMU type 1 (Type 1) 00:04:55.779 Starting DPDK initialization... 00:04:55.779 Starting SPDK post initialization... 00:04:55.779 SPDK NVMe probe 00:04:55.779 Attaching to 0000:88:00.0 00:04:55.779 Attached to 0000:88:00.0 00:04:55.779 Cleaning up... 
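The post-init probe above can only attach to 0000:88:00.0 because scripts/setup.sh has bound it to vfio-pci; the same script is what keeps flipping the I/OAT and NVMe functions between kernel drivers and vfio-pci throughout this run. The invocation pattern, shown as a rough sketch with an illustrative hugepage amount:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo HUGEMEM=2048 ./scripts/setup.sh     # bind supported devices to vfio-pci and reserve hugepages
./scripts/setup.sh status                # print the Hugepages / BDF table shown earlier
sudo ./scripts/setup.sh reset            # hand devices back to their kernel drivers (nvme, ioatdma)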
00:04:55.779 00:04:55.779 real 0m4.415s 00:04:55.779 user 0m3.284s 00:04:55.779 sys 0m0.193s 00:04:55.779 06:00:48 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.779 06:00:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 ************************************ 00:04:55.779 END TEST env_dpdk_post_init 00:04:55.779 ************************************ 00:04:55.779 06:00:49 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.779 06:00:49 env -- env/env.sh@26 -- # uname 00:04:55.779 06:00:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.779 06:00:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.779 06:00:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.779 06:00:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.779 06:00:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 ************************************ 00:04:55.779 START TEST env_mem_callbacks 00:04:55.779 ************************************ 00:04:55.779 06:00:49 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.779 EAL: Detected CPU lcores: 48 00:04:55.779 EAL: Detected NUMA nodes: 2 00:04:55.779 EAL: Detected shared linkage of DPDK 00:04:55.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.779 EAL: Selected IOVA mode 'VA' 00:04:55.779 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.779 EAL: VFIO support initialized 00:04:55.779 00:04:55.779 00:04:55.779 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.779 http://cunit.sourceforge.net/ 00:04:55.779 00:04:55.779 00:04:55.779 Suite: memory 00:04:55.779 Test: test ... 
00:04:55.779 register 0x200000200000 2097152 00:04:55.779 malloc 3145728 00:04:55.779 register 0x200000400000 4194304 00:04:55.779 buf 0x200000500000 len 3145728 PASSED 00:04:55.779 malloc 64 00:04:55.779 buf 0x2000004fff40 len 64 PASSED 00:04:55.779 malloc 4194304 00:04:55.779 register 0x200000800000 6291456 00:04:55.779 buf 0x200000a00000 len 4194304 PASSED 00:04:55.779 free 0x200000500000 3145728 00:04:55.779 free 0x2000004fff40 64 00:04:55.779 unregister 0x200000400000 4194304 PASSED 00:04:55.779 free 0x200000a00000 4194304 00:04:55.779 unregister 0x200000800000 6291456 PASSED 00:04:55.779 malloc 8388608 00:04:55.779 register 0x200000400000 10485760 00:04:55.780 buf 0x200000600000 len 8388608 PASSED 00:04:55.780 free 0x200000600000 8388608 00:04:55.780 unregister 0x200000400000 10485760 PASSED 00:04:55.780 passed 00:04:55.780 00:04:55.780 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.780 suites 1 1 n/a 0 0 00:04:55.780 tests 1 1 1 0 0 00:04:55.780 asserts 15 15 15 0 n/a 00:04:55.780 00:04:55.780 Elapsed time = 0.005 seconds 00:04:55.780 00:04:55.780 real 0m0.048s 00:04:55.780 user 0m0.011s 00:04:55.780 sys 0m0.037s 00:04:55.780 06:00:49 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.780 06:00:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 ************************************ 00:04:55.780 END TEST env_mem_callbacks 00:04:55.780 ************************************ 00:04:55.780 06:00:49 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.780 00:04:55.780 real 0m6.396s 00:04:55.780 user 0m4.401s 00:04:55.780 sys 0m1.039s 00:04:55.780 06:00:49 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.780 06:00:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 ************************************ 00:04:55.780 END TEST env 00:04:55.780 ************************************ 00:04:56.038 06:00:49 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.038 06:00:49 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:56.038 06:00:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.038 06:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.038 06:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:56.038 ************************************ 00:04:56.038 START TEST rpc 00:04:56.038 ************************************ 00:04:56.038 06:00:49 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:56.038 * Looking for test storage... 00:04:56.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.038 06:00:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1601184 00:04:56.038 06:00:49 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:56.038 06:00:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.038 06:00:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1601184 00:04:56.038 06:00:49 rpc -- common/autotest_common.sh@829 -- # '[' -z 1601184 ']' 00:04:56.038 06:00:49 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.038 06:00:49 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.038 06:00:49 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:56.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.038 06:00:49 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.038 06:00:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.038 [2024-07-23 06:00:49.271287] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:04:56.038 [2024-07-23 06:00:49.271386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601184 ] 00:04:56.038 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.038 [2024-07-23 06:00:49.304472] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:56.038 [2024-07-23 06:00:49.336500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.298 [2024-07-23 06:00:49.427708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:56.298 [2024-07-23 06:00:49.427776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1601184' to capture a snapshot of events at runtime. 00:04:56.298 [2024-07-23 06:00:49.427793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.298 [2024-07-23 06:00:49.427808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:56.298 [2024-07-23 06:00:49.427820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1601184 for offline analysis/debug. 00:04:56.298 [2024-07-23 06:00:49.427872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.560 06:00:49 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.561 06:00:49 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:56.561 06:00:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.561 06:00:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.561 06:00:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:56.561 06:00:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.561 06:00:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.561 06:00:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.561 06:00:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.561 ************************************ 00:04:56.561 START TEST rpc_integrity 00:04:56.561 ************************************ 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.561 { 00:04:56.561 "name": "Malloc0", 00:04:56.561 "aliases": [ 00:04:56.561 "4f368e8d-eed9-4396-8f40-0b3721132927" 00:04:56.561 ], 00:04:56.561 "product_name": "Malloc disk", 00:04:56.561 "block_size": 512, 00:04:56.561 "num_blocks": 16384, 00:04:56.561 "uuid": "4f368e8d-eed9-4396-8f40-0b3721132927", 00:04:56.561 "assigned_rate_limits": { 00:04:56.561 "rw_ios_per_sec": 0, 00:04:56.561 "rw_mbytes_per_sec": 0, 00:04:56.561 "r_mbytes_per_sec": 0, 00:04:56.561 "w_mbytes_per_sec": 0 00:04:56.561 }, 00:04:56.561 "claimed": false, 00:04:56.561 "zoned": false, 00:04:56.561 "supported_io_types": { 00:04:56.561 "read": true, 00:04:56.561 "write": true, 00:04:56.561 "unmap": true, 00:04:56.561 "flush": true, 00:04:56.561 "reset": true, 00:04:56.561 "nvme_admin": false, 00:04:56.561 "nvme_io": false, 00:04:56.561 "nvme_io_md": false, 00:04:56.561 "write_zeroes": true, 00:04:56.561 "zcopy": true, 00:04:56.561 "get_zone_info": false, 00:04:56.561 "zone_management": false, 00:04:56.561 "zone_append": false, 00:04:56.561 "compare": false, 00:04:56.561 "compare_and_write": false, 00:04:56.561 "abort": true, 00:04:56.561 "seek_hole": false, 00:04:56.561 "seek_data": false, 00:04:56.561 "copy": true, 00:04:56.561 "nvme_iov_md": false 00:04:56.561 }, 00:04:56.561 "memory_domains": [ 00:04:56.561 { 00:04:56.561 "dma_device_id": "system", 00:04:56.561 "dma_device_type": 1 00:04:56.561 }, 00:04:56.561 { 00:04:56.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.561 "dma_device_type": 2 00:04:56.561 } 00:04:56.561 ], 00:04:56.561 "driver_specific": {} 00:04:56.561 } 00:04:56.561 ]' 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.561 [2024-07-23 06:00:49.825543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.561 [2024-07-23 06:00:49.825588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.561 [2024-07-23 06:00:49.825636] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bb37f0 00:04:56.561 [2024-07-23 06:00:49.825651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.561 [2024-07-23 06:00:49.827135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.561 [2024-07-23 06:00:49.827162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.561 Passthru0 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.561 { 00:04:56.561 "name": "Malloc0", 00:04:56.561 "aliases": [ 00:04:56.561 "4f368e8d-eed9-4396-8f40-0b3721132927" 00:04:56.561 ], 00:04:56.561 "product_name": "Malloc disk", 00:04:56.561 "block_size": 512, 00:04:56.561 "num_blocks": 16384, 00:04:56.561 "uuid": "4f368e8d-eed9-4396-8f40-0b3721132927", 00:04:56.561 "assigned_rate_limits": { 00:04:56.561 "rw_ios_per_sec": 0, 00:04:56.561 "rw_mbytes_per_sec": 0, 00:04:56.561 "r_mbytes_per_sec": 0, 00:04:56.561 "w_mbytes_per_sec": 0 00:04:56.561 }, 00:04:56.561 "claimed": true, 00:04:56.561 "claim_type": "exclusive_write", 00:04:56.561 "zoned": false, 00:04:56.561 "supported_io_types": { 00:04:56.561 "read": true, 00:04:56.561 "write": true, 00:04:56.561 "unmap": true, 00:04:56.561 "flush": true, 00:04:56.561 "reset": true, 00:04:56.561 "nvme_admin": false, 00:04:56.561 "nvme_io": false, 00:04:56.561 "nvme_io_md": false, 00:04:56.561 "write_zeroes": true, 00:04:56.561 "zcopy": true, 00:04:56.561 "get_zone_info": false, 00:04:56.561 "zone_management": false, 00:04:56.561 "zone_append": false, 00:04:56.561 "compare": false, 00:04:56.561 "compare_and_write": false, 00:04:56.561 "abort": true, 00:04:56.561 "seek_hole": false, 00:04:56.561 "seek_data": false, 00:04:56.561 "copy": true, 00:04:56.561 "nvme_iov_md": false 00:04:56.561 }, 00:04:56.561 "memory_domains": [ 00:04:56.561 { 00:04:56.561 "dma_device_id": "system", 00:04:56.561 "dma_device_type": 1 00:04:56.561 }, 00:04:56.561 { 00:04:56.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.561 "dma_device_type": 2 00:04:56.561 } 00:04:56.561 ], 00:04:56.561 "driver_specific": {} 00:04:56.561 }, 00:04:56.561 { 00:04:56.561 "name": "Passthru0", 00:04:56.561 "aliases": [ 00:04:56.561 "3df289b1-7862-53b7-b6de-894841ff26f7" 00:04:56.561 ], 00:04:56.561 "product_name": "passthru", 00:04:56.561 "block_size": 512, 00:04:56.561 "num_blocks": 16384, 00:04:56.561 "uuid": "3df289b1-7862-53b7-b6de-894841ff26f7", 00:04:56.561 "assigned_rate_limits": { 00:04:56.561 "rw_ios_per_sec": 0, 00:04:56.561 "rw_mbytes_per_sec": 0, 00:04:56.561 "r_mbytes_per_sec": 0, 00:04:56.561 "w_mbytes_per_sec": 0 00:04:56.561 }, 00:04:56.561 "claimed": false, 00:04:56.561 "zoned": false, 00:04:56.561 "supported_io_types": { 00:04:56.561 "read": true, 00:04:56.561 "write": true, 00:04:56.561 "unmap": true, 00:04:56.561 "flush": true, 00:04:56.561 "reset": true, 00:04:56.561 "nvme_admin": false, 00:04:56.561 "nvme_io": false, 00:04:56.561 "nvme_io_md": false, 00:04:56.561 "write_zeroes": true, 00:04:56.561 "zcopy": true, 00:04:56.561 "get_zone_info": false, 
00:04:56.561 "zone_management": false, 00:04:56.561 "zone_append": false, 00:04:56.561 "compare": false, 00:04:56.561 "compare_and_write": false, 00:04:56.561 "abort": true, 00:04:56.561 "seek_hole": false, 00:04:56.561 "seek_data": false, 00:04:56.561 "copy": true, 00:04:56.561 "nvme_iov_md": false 00:04:56.561 }, 00:04:56.561 "memory_domains": [ 00:04:56.561 { 00:04:56.561 "dma_device_id": "system", 00:04:56.561 "dma_device_type": 1 00:04:56.561 }, 00:04:56.561 { 00:04:56.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.561 "dma_device_type": 2 00:04:56.561 } 00:04:56.561 ], 00:04:56.561 "driver_specific": { 00:04:56.561 "passthru": { 00:04:56.561 "name": "Passthru0", 00:04:56.561 "base_bdev_name": "Malloc0" 00:04:56.561 } 00:04:56.561 } 00:04:56.561 } 00:04:56.561 ]' 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.561 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.561 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.829 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.829 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.829 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.829 06:00:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.829 00:04:56.829 real 0m0.227s 00:04:56.829 user 0m0.147s 00:04:56.829 sys 0m0.027s 00:04:56.829 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.829 06:00:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.829 ************************************ 00:04:56.829 END TEST rpc_integrity 00:04:56.829 ************************************ 00:04:56.829 06:00:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:56.829 06:00:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:56.829 06:00:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.829 06:00:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.829 06:00:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.829 ************************************ 00:04:56.829 START TEST rpc_plugins 00:04:56.829 ************************************ 00:04:56.829 06:00:49 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:56.829 06:00:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:56.829 06:00:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.829 06:00:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.829 
06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:56.829 { 00:04:56.829 "name": "Malloc1", 00:04:56.829 "aliases": [ 00:04:56.829 "df91e2b7-3ac4-4039-acd8-fe6b8321cf15" 00:04:56.829 ], 00:04:56.829 "product_name": "Malloc disk", 00:04:56.829 "block_size": 4096, 00:04:56.829 "num_blocks": 256, 00:04:56.829 "uuid": "df91e2b7-3ac4-4039-acd8-fe6b8321cf15", 00:04:56.829 "assigned_rate_limits": { 00:04:56.829 "rw_ios_per_sec": 0, 00:04:56.829 "rw_mbytes_per_sec": 0, 00:04:56.829 "r_mbytes_per_sec": 0, 00:04:56.829 "w_mbytes_per_sec": 0 00:04:56.829 }, 00:04:56.829 "claimed": false, 00:04:56.829 "zoned": false, 00:04:56.829 "supported_io_types": { 00:04:56.829 "read": true, 00:04:56.829 "write": true, 00:04:56.829 "unmap": true, 00:04:56.829 "flush": true, 00:04:56.829 "reset": true, 00:04:56.829 "nvme_admin": false, 00:04:56.829 "nvme_io": false, 00:04:56.829 "nvme_io_md": false, 00:04:56.829 "write_zeroes": true, 00:04:56.829 "zcopy": true, 00:04:56.829 "get_zone_info": false, 00:04:56.829 "zone_management": false, 00:04:56.829 "zone_append": false, 00:04:56.829 "compare": false, 00:04:56.829 "compare_and_write": false, 00:04:56.829 "abort": true, 00:04:56.829 "seek_hole": false, 00:04:56.829 "seek_data": false, 00:04:56.829 "copy": true, 00:04:56.829 "nvme_iov_md": false 00:04:56.829 }, 00:04:56.829 "memory_domains": [ 00:04:56.829 { 00:04:56.829 "dma_device_id": "system", 00:04:56.829 "dma_device_type": 1 00:04:56.829 }, 00:04:56.829 { 00:04:56.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.829 "dma_device_type": 2 00:04:56.829 } 00:04:56.829 ], 00:04:56.829 "driver_specific": {} 00:04:56.829 } 00:04:56.829 ]' 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.829 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:56.829 06:00:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:56.830 00:04:56.830 real 0m0.112s 00:04:56.830 user 0m0.076s 00:04:56.830 sys 0m0.008s 00:04:56.830 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.830 06:00:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.830 
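The rpc_plugins case just finishing here exercises the --plugin hook of scripts/rpc.py: create_malloc and delete_malloc are not built-in methods but come from a plugin module named rpc_plugin that the test places on PYTHONPATH. A rough sketch of the same calls from the command line; the plugin location shown is an assumption, only the module name rpc_plugin and the method names are taken from the log:

  export PYTHONPATH=$PYTHONPATH:/path/to/dir/containing/rpc_plugin.py   # hypothetical path to the plugin module
  ./scripts/rpc.py --plugin rpc_plugin create_malloc                    # plugin-defined method; prints the new bdev name, e.g. Malloc1
  ./scripts/rpc.py bdev_get_bdevs | jq length                           # 1
  ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1
  ./scripts/rpc.py bdev_get_bdevs | jq length                           # 0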
************************************ 00:04:56.830 END TEST rpc_plugins 00:04:56.830 ************************************ 00:04:56.830 06:00:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:56.830 06:00:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:56.830 06:00:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.830 06:00:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.830 06:00:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.830 ************************************ 00:04:56.830 START TEST rpc_trace_cmd_test 00:04:56.830 ************************************ 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:56.830 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1601184", 00:04:56.830 "tpoint_group_mask": "0x8", 00:04:56.830 "iscsi_conn": { 00:04:56.830 "mask": "0x2", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "scsi": { 00:04:56.830 "mask": "0x4", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "bdev": { 00:04:56.830 "mask": "0x8", 00:04:56.830 "tpoint_mask": "0xffffffffffffffff" 00:04:56.830 }, 00:04:56.830 "nvmf_rdma": { 00:04:56.830 "mask": "0x10", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "nvmf_tcp": { 00:04:56.830 "mask": "0x20", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "ftl": { 00:04:56.830 "mask": "0x40", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "blobfs": { 00:04:56.830 "mask": "0x80", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "dsa": { 00:04:56.830 "mask": "0x200", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "thread": { 00:04:56.830 "mask": "0x400", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "nvme_pcie": { 00:04:56.830 "mask": "0x800", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "iaa": { 00:04:56.830 "mask": "0x1000", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "nvme_tcp": { 00:04:56.830 "mask": "0x2000", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "bdev_nvme": { 00:04:56.830 "mask": "0x4000", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 }, 00:04:56.830 "sock": { 00:04:56.830 "mask": "0x8000", 00:04:56.830 "tpoint_mask": "0x0" 00:04:56.830 } 00:04:56.830 }' 00:04:56.830 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:57.089 06:00:50 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:57.089 00:04:57.089 real 0m0.187s 00:04:57.089 user 0m0.166s 00:04:57.089 sys 0m0.016s 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.089 06:00:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.089 ************************************ 00:04:57.089 END TEST rpc_trace_cmd_test 00:04:57.089 ************************************ 00:04:57.089 06:00:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.089 06:00:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:57.089 06:00:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:57.089 06:00:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:57.089 06:00:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.089 06:00:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.089 06:00:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.089 ************************************ 00:04:57.089 START TEST rpc_daemon_integrity 00:04:57.089 ************************************ 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.089 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.349 { 00:04:57.349 "name": "Malloc2", 00:04:57.349 "aliases": [ 00:04:57.349 "48a4a627-0abf-48a0-8e6b-c1b77517e4f1" 00:04:57.349 ], 00:04:57.349 "product_name": "Malloc disk", 00:04:57.349 "block_size": 512, 00:04:57.349 "num_blocks": 16384, 00:04:57.349 "uuid": "48a4a627-0abf-48a0-8e6b-c1b77517e4f1", 00:04:57.349 "assigned_rate_limits": { 00:04:57.349 "rw_ios_per_sec": 0, 00:04:57.349 "rw_mbytes_per_sec": 0, 00:04:57.349 "r_mbytes_per_sec": 0, 00:04:57.349 "w_mbytes_per_sec": 0 00:04:57.349 }, 00:04:57.349 "claimed": false, 
00:04:57.349 "zoned": false, 00:04:57.349 "supported_io_types": { 00:04:57.349 "read": true, 00:04:57.349 "write": true, 00:04:57.349 "unmap": true, 00:04:57.349 "flush": true, 00:04:57.349 "reset": true, 00:04:57.349 "nvme_admin": false, 00:04:57.349 "nvme_io": false, 00:04:57.349 "nvme_io_md": false, 00:04:57.349 "write_zeroes": true, 00:04:57.349 "zcopy": true, 00:04:57.349 "get_zone_info": false, 00:04:57.349 "zone_management": false, 00:04:57.349 "zone_append": false, 00:04:57.349 "compare": false, 00:04:57.349 "compare_and_write": false, 00:04:57.349 "abort": true, 00:04:57.349 "seek_hole": false, 00:04:57.349 "seek_data": false, 00:04:57.349 "copy": true, 00:04:57.349 "nvme_iov_md": false 00:04:57.349 }, 00:04:57.349 "memory_domains": [ 00:04:57.349 { 00:04:57.349 "dma_device_id": "system", 00:04:57.349 "dma_device_type": 1 00:04:57.349 }, 00:04:57.349 { 00:04:57.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.349 "dma_device_type": 2 00:04:57.349 } 00:04:57.349 ], 00:04:57.349 "driver_specific": {} 00:04:57.349 } 00:04:57.349 ]' 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.349 [2024-07-23 06:00:50.491715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:57.349 [2024-07-23 06:00:50.491757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.349 [2024-07-23 06:00:50.491779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d57490 00:04:57.349 [2024-07-23 06:00:50.491793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.349 [2024-07-23 06:00:50.492978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.349 [2024-07-23 06:00:50.493003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.349 Passthru0 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.349 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.349 { 00:04:57.349 "name": "Malloc2", 00:04:57.349 "aliases": [ 00:04:57.349 "48a4a627-0abf-48a0-8e6b-c1b77517e4f1" 00:04:57.349 ], 00:04:57.349 "product_name": "Malloc disk", 00:04:57.349 "block_size": 512, 00:04:57.349 "num_blocks": 16384, 00:04:57.349 "uuid": "48a4a627-0abf-48a0-8e6b-c1b77517e4f1", 00:04:57.349 "assigned_rate_limits": { 00:04:57.349 "rw_ios_per_sec": 0, 00:04:57.349 "rw_mbytes_per_sec": 0, 00:04:57.349 "r_mbytes_per_sec": 0, 00:04:57.349 "w_mbytes_per_sec": 0 00:04:57.349 }, 00:04:57.349 "claimed": true, 00:04:57.349 "claim_type": "exclusive_write", 00:04:57.349 "zoned": false, 00:04:57.349 "supported_io_types": { 00:04:57.349 "read": true, 00:04:57.349 "write": true, 
00:04:57.349 "unmap": true, 00:04:57.349 "flush": true, 00:04:57.349 "reset": true, 00:04:57.349 "nvme_admin": false, 00:04:57.349 "nvme_io": false, 00:04:57.349 "nvme_io_md": false, 00:04:57.349 "write_zeroes": true, 00:04:57.349 "zcopy": true, 00:04:57.349 "get_zone_info": false, 00:04:57.349 "zone_management": false, 00:04:57.349 "zone_append": false, 00:04:57.349 "compare": false, 00:04:57.349 "compare_and_write": false, 00:04:57.349 "abort": true, 00:04:57.349 "seek_hole": false, 00:04:57.349 "seek_data": false, 00:04:57.349 "copy": true, 00:04:57.349 "nvme_iov_md": false 00:04:57.349 }, 00:04:57.349 "memory_domains": [ 00:04:57.349 { 00:04:57.349 "dma_device_id": "system", 00:04:57.349 "dma_device_type": 1 00:04:57.349 }, 00:04:57.349 { 00:04:57.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.349 "dma_device_type": 2 00:04:57.349 } 00:04:57.349 ], 00:04:57.349 "driver_specific": {} 00:04:57.349 }, 00:04:57.349 { 00:04:57.349 "name": "Passthru0", 00:04:57.349 "aliases": [ 00:04:57.349 "7463cdb7-307f-56c7-b501-2fb37862f7f2" 00:04:57.349 ], 00:04:57.349 "product_name": "passthru", 00:04:57.349 "block_size": 512, 00:04:57.349 "num_blocks": 16384, 00:04:57.350 "uuid": "7463cdb7-307f-56c7-b501-2fb37862f7f2", 00:04:57.350 "assigned_rate_limits": { 00:04:57.350 "rw_ios_per_sec": 0, 00:04:57.350 "rw_mbytes_per_sec": 0, 00:04:57.350 "r_mbytes_per_sec": 0, 00:04:57.350 "w_mbytes_per_sec": 0 00:04:57.350 }, 00:04:57.350 "claimed": false, 00:04:57.350 "zoned": false, 00:04:57.350 "supported_io_types": { 00:04:57.350 "read": true, 00:04:57.350 "write": true, 00:04:57.350 "unmap": true, 00:04:57.350 "flush": true, 00:04:57.350 "reset": true, 00:04:57.350 "nvme_admin": false, 00:04:57.350 "nvme_io": false, 00:04:57.350 "nvme_io_md": false, 00:04:57.350 "write_zeroes": true, 00:04:57.350 "zcopy": true, 00:04:57.350 "get_zone_info": false, 00:04:57.350 "zone_management": false, 00:04:57.350 "zone_append": false, 00:04:57.350 "compare": false, 00:04:57.350 "compare_and_write": false, 00:04:57.350 "abort": true, 00:04:57.350 "seek_hole": false, 00:04:57.350 "seek_data": false, 00:04:57.350 "copy": true, 00:04:57.350 "nvme_iov_md": false 00:04:57.350 }, 00:04:57.350 "memory_domains": [ 00:04:57.350 { 00:04:57.350 "dma_device_id": "system", 00:04:57.350 "dma_device_type": 1 00:04:57.350 }, 00:04:57.350 { 00:04:57.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.350 "dma_device_type": 2 00:04:57.350 } 00:04:57.350 ], 00:04:57.350 "driver_specific": { 00:04:57.350 "passthru": { 00:04:57.350 "name": "Passthru0", 00:04:57.350 "base_bdev_name": "Malloc2" 00:04:57.350 } 00:04:57.350 } 00:04:57.350 } 00:04:57.350 ]' 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.350 06:00:50 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.350 00:04:57.350 real 0m0.231s 00:04:57.350 user 0m0.150s 00:04:57.350 sys 0m0.021s 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.350 06:00:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.350 ************************************ 00:04:57.350 END TEST rpc_daemon_integrity 00:04:57.350 ************************************ 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.350 06:00:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:57.350 06:00:50 rpc -- rpc/rpc.sh@84 -- # killprocess 1601184 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@948 -- # '[' -z 1601184 ']' 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@952 -- # kill -0 1601184 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@953 -- # uname 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1601184 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1601184' 00:04:57.350 killing process with pid 1601184 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@967 -- # kill 1601184 00:04:57.350 06:00:50 rpc -- common/autotest_common.sh@972 -- # wait 1601184 00:04:57.918 00:04:57.918 real 0m1.890s 00:04:57.918 user 0m2.371s 00:04:57.918 sys 0m0.586s 00:04:57.918 06:00:51 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.918 06:00:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.918 ************************************ 00:04:57.918 END TEST rpc 00:04:57.918 ************************************ 00:04:57.918 06:00:51 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.918 06:00:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.918 06:00:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.918 06:00:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.918 06:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:57.918 ************************************ 00:04:57.918 START TEST skip_rpc 00:04:57.918 ************************************ 00:04:57.918 06:00:51 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.918 * Looking for test storage... 
00:04:57.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.918 06:00:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.918 06:00:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.918 06:00:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:57.918 06:00:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.918 06:00:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.918 06:00:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.918 ************************************ 00:04:57.918 START TEST skip_rpc 00:04:57.918 ************************************ 00:04:57.918 06:00:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:57.918 06:00:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1601614 00:04:57.918 06:00:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:57.918 06:00:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.918 06:00:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:57.918 [2024-07-23 06:00:51.237130] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:04:57.918 [2024-07-23 06:00:51.237205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601614 ] 00:04:58.176 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.176 [2024-07-23 06:00:51.268083] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
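The skip_rpc case now in progress starts the target with --no-rpc-server and then asserts that any RPC call fails, since no /var/tmp/spdk.sock is ever created. Stripped of the xtrace plumbing, the check amounts to something like this sketch (paths shortened, error handling simplified):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                        # the test sleeps; there is no RPC socket to wait on
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server answered"; exit 1
  fi
  kill "$tgt_pid"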
00:04:58.176 [2024-07-23 06:00:51.298291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.176 [2024-07-23 06:00:51.388750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1601614 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1601614 ']' 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1601614 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1601614 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1601614' 00:05:03.465 killing process with pid 1601614 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1601614 00:05:03.465 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1601614 00:05:03.465 00:05:03.465 real 0m5.451s 00:05:03.466 user 0m5.138s 00:05:03.466 sys 0m0.310s 00:05:03.466 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.466 06:00:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.466 ************************************ 00:05:03.466 END TEST skip_rpc 00:05:03.466 ************************************ 00:05:03.466 06:00:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.466 06:00:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:03.466 06:00:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.466 
06:00:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.466 06:00:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.466 ************************************ 00:05:03.466 START TEST skip_rpc_with_json 00:05:03.466 ************************************ 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1602301 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1602301 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1602301 ']' 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.466 06:00:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.466 [2024-07-23 06:00:56.742144] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:03.466 [2024-07-23 06:00:56.742229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602301 ] 00:05:03.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.466 [2024-07-23 06:00:56.772751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
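Before issuing any RPCs, skip_rpc_with_json starts spdk_tgt and calls waitforlisten, which blocks until the target accepts connections on /var/tmp/spdk.sock. A hand-rolled approximation of that wait, polling with a cheap RPC; the real helper in test/common/autotest_common.sh is more thorough:

  ./build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$tgt_pid" || { echo "target exited before listening"; exit 1; }
      sleep 0.5
  done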
00:05:03.466 [2024-07-23 06:00:56.804311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.723 [2024-07-23 06:00:56.893456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.984 [2024-07-23 06:00:57.149544] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:03.984 request: 00:05:03.984 { 00:05:03.984 "trtype": "tcp", 00:05:03.984 "method": "nvmf_get_transports", 00:05:03.984 "req_id": 1 00:05:03.984 } 00:05:03.984 Got JSON-RPC error response 00:05:03.984 response: 00:05:03.984 { 00:05:03.984 "code": -19, 00:05:03.984 "message": "No such device" 00:05:03.984 } 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.984 [2024-07-23 06:00:57.157693] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.984 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.984 { 00:05:03.984 "subsystems": [ 00:05:03.984 { 00:05:03.984 "subsystem": "vfio_user_target", 00:05:03.984 "config": null 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "keyring", 00:05:03.984 "config": [] 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "iobuf", 00:05:03.984 "config": [ 00:05:03.984 { 00:05:03.984 "method": "iobuf_set_options", 00:05:03.984 "params": { 00:05:03.984 "small_pool_count": 8192, 00:05:03.984 "large_pool_count": 1024, 00:05:03.984 "small_bufsize": 8192, 00:05:03.984 "large_bufsize": 135168 00:05:03.984 } 00:05:03.984 } 00:05:03.984 ] 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "sock", 00:05:03.984 "config": [ 00:05:03.984 { 00:05:03.984 "method": "sock_set_default_impl", 00:05:03.984 "params": { 00:05:03.984 "impl_name": "posix" 00:05:03.984 } 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "method": "sock_impl_set_options", 00:05:03.984 "params": { 00:05:03.984 "impl_name": "ssl", 00:05:03.984 "recv_buf_size": 4096, 00:05:03.984 "send_buf_size": 4096, 00:05:03.984 "enable_recv_pipe": true, 00:05:03.984 "enable_quickack": false, 00:05:03.984 "enable_placement_id": 0, 00:05:03.984 "enable_zerocopy_send_server": true, 00:05:03.984 
"enable_zerocopy_send_client": false, 00:05:03.984 "zerocopy_threshold": 0, 00:05:03.984 "tls_version": 0, 00:05:03.984 "enable_ktls": false 00:05:03.984 } 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "method": "sock_impl_set_options", 00:05:03.984 "params": { 00:05:03.984 "impl_name": "posix", 00:05:03.984 "recv_buf_size": 2097152, 00:05:03.984 "send_buf_size": 2097152, 00:05:03.984 "enable_recv_pipe": true, 00:05:03.984 "enable_quickack": false, 00:05:03.984 "enable_placement_id": 0, 00:05:03.984 "enable_zerocopy_send_server": true, 00:05:03.984 "enable_zerocopy_send_client": false, 00:05:03.984 "zerocopy_threshold": 0, 00:05:03.984 "tls_version": 0, 00:05:03.984 "enable_ktls": false 00:05:03.984 } 00:05:03.984 } 00:05:03.984 ] 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "vmd", 00:05:03.984 "config": [] 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "accel", 00:05:03.984 "config": [ 00:05:03.984 { 00:05:03.984 "method": "accel_set_options", 00:05:03.984 "params": { 00:05:03.984 "small_cache_size": 128, 00:05:03.984 "large_cache_size": 16, 00:05:03.984 "task_count": 2048, 00:05:03.984 "sequence_count": 2048, 00:05:03.984 "buf_count": 2048 00:05:03.984 } 00:05:03.984 } 00:05:03.984 ] 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "bdev", 00:05:03.984 "config": [ 00:05:03.984 { 00:05:03.984 "method": "bdev_set_options", 00:05:03.984 "params": { 00:05:03.984 "bdev_io_pool_size": 65535, 00:05:03.984 "bdev_io_cache_size": 256, 00:05:03.984 "bdev_auto_examine": true, 00:05:03.984 "iobuf_small_cache_size": 128, 00:05:03.984 "iobuf_large_cache_size": 16 00:05:03.984 } 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "method": "bdev_raid_set_options", 00:05:03.984 "params": { 00:05:03.984 "process_window_size_kb": 1024, 00:05:03.984 "process_max_bandwidth_mb_sec": 0 00:05:03.984 } 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "method": "bdev_iscsi_set_options", 00:05:03.984 "params": { 00:05:03.984 "timeout_sec": 30 00:05:03.984 } 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "method": "bdev_nvme_set_options", 00:05:03.984 "params": { 00:05:03.984 "action_on_timeout": "none", 00:05:03.984 "timeout_us": 0, 00:05:03.984 "timeout_admin_us": 0, 00:05:03.984 "keep_alive_timeout_ms": 10000, 00:05:03.984 "arbitration_burst": 0, 00:05:03.984 "low_priority_weight": 0, 00:05:03.984 "medium_priority_weight": 0, 00:05:03.984 "high_priority_weight": 0, 00:05:03.984 "nvme_adminq_poll_period_us": 10000, 00:05:03.984 "nvme_ioq_poll_period_us": 0, 00:05:03.984 "io_queue_requests": 0, 00:05:03.984 "delay_cmd_submit": true, 00:05:03.984 "transport_retry_count": 4, 00:05:03.984 "bdev_retry_count": 3, 00:05:03.984 "transport_ack_timeout": 0, 00:05:03.984 "ctrlr_loss_timeout_sec": 0, 00:05:03.984 "reconnect_delay_sec": 0, 00:05:03.984 "fast_io_fail_timeout_sec": 0, 00:05:03.984 "disable_auto_failback": false, 00:05:03.984 "generate_uuids": false, 00:05:03.984 "transport_tos": 0, 00:05:03.984 "nvme_error_stat": false, 00:05:03.984 "rdma_srq_size": 0, 00:05:03.984 "io_path_stat": false, 00:05:03.984 "allow_accel_sequence": false, 00:05:03.984 "rdma_max_cq_size": 0, 00:05:03.984 "rdma_cm_event_timeout_ms": 0, 00:05:03.984 "dhchap_digests": [ 00:05:03.984 "sha256", 00:05:03.984 "sha384", 00:05:03.984 "sha512" 00:05:03.984 ], 00:05:03.984 "dhchap_dhgroups": [ 00:05:03.984 "null", 00:05:03.984 "ffdhe2048", 00:05:03.984 "ffdhe3072", 00:05:03.984 "ffdhe4096", 00:05:03.984 "ffdhe6144", 00:05:03.984 "ffdhe8192" 00:05:03.984 ] 00:05:03.984 } 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "method": 
"bdev_nvme_set_hotplug", 00:05:03.984 "params": { 00:05:03.984 "period_us": 100000, 00:05:03.984 "enable": false 00:05:03.984 } 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "method": "bdev_wait_for_examine" 00:05:03.984 } 00:05:03.984 ] 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "scsi", 00:05:03.984 "config": null 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "scheduler", 00:05:03.984 "config": [ 00:05:03.984 { 00:05:03.984 "method": "framework_set_scheduler", 00:05:03.984 "params": { 00:05:03.984 "name": "static" 00:05:03.984 } 00:05:03.984 } 00:05:03.984 ] 00:05:03.984 }, 00:05:03.984 { 00:05:03.984 "subsystem": "vhost_scsi", 00:05:03.985 "config": [] 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "subsystem": "vhost_blk", 00:05:03.985 "config": [] 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "subsystem": "ublk", 00:05:03.985 "config": [] 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "subsystem": "nbd", 00:05:03.985 "config": [] 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "subsystem": "nvmf", 00:05:03.985 "config": [ 00:05:03.985 { 00:05:03.985 "method": "nvmf_set_config", 00:05:03.985 "params": { 00:05:03.985 "discovery_filter": "match_any", 00:05:03.985 "admin_cmd_passthru": { 00:05:03.985 "identify_ctrlr": false 00:05:03.985 } 00:05:03.985 } 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "method": "nvmf_set_max_subsystems", 00:05:03.985 "params": { 00:05:03.985 "max_subsystems": 1024 00:05:03.985 } 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "method": "nvmf_set_crdt", 00:05:03.985 "params": { 00:05:03.985 "crdt1": 0, 00:05:03.985 "crdt2": 0, 00:05:03.985 "crdt3": 0 00:05:03.985 } 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "method": "nvmf_create_transport", 00:05:03.985 "params": { 00:05:03.985 "trtype": "TCP", 00:05:03.985 "max_queue_depth": 128, 00:05:03.985 "max_io_qpairs_per_ctrlr": 127, 00:05:03.985 "in_capsule_data_size": 4096, 00:05:03.985 "max_io_size": 131072, 00:05:03.985 "io_unit_size": 131072, 00:05:03.985 "max_aq_depth": 128, 00:05:03.985 "num_shared_buffers": 511, 00:05:03.985 "buf_cache_size": 4294967295, 00:05:03.985 "dif_insert_or_strip": false, 00:05:03.985 "zcopy": false, 00:05:03.985 "c2h_success": true, 00:05:03.985 "sock_priority": 0, 00:05:03.985 "abort_timeout_sec": 1, 00:05:03.985 "ack_timeout": 0, 00:05:03.985 "data_wr_pool_size": 0 00:05:03.985 } 00:05:03.985 } 00:05:03.985 ] 00:05:03.985 }, 00:05:03.985 { 00:05:03.985 "subsystem": "iscsi", 00:05:03.985 "config": [ 00:05:03.985 { 00:05:03.985 "method": "iscsi_set_options", 00:05:03.985 "params": { 00:05:03.985 "node_base": "iqn.2016-06.io.spdk", 00:05:03.985 "max_sessions": 128, 00:05:03.985 "max_connections_per_session": 2, 00:05:03.985 "max_queue_depth": 64, 00:05:03.985 "default_time2wait": 2, 00:05:03.985 "default_time2retain": 20, 00:05:03.985 "first_burst_length": 8192, 00:05:03.985 "immediate_data": true, 00:05:03.985 "allow_duplicated_isid": false, 00:05:03.985 "error_recovery_level": 0, 00:05:03.985 "nop_timeout": 60, 00:05:03.985 "nop_in_interval": 30, 00:05:03.985 "disable_chap": false, 00:05:03.985 "require_chap": false, 00:05:03.985 "mutual_chap": false, 00:05:03.985 "chap_group": 0, 00:05:03.985 "max_large_datain_per_connection": 64, 00:05:03.985 "max_r2t_per_connection": 4, 00:05:03.985 "pdu_pool_size": 36864, 00:05:03.985 "immediate_data_pool_size": 16384, 00:05:03.985 "data_out_pool_size": 2048 00:05:03.985 } 00:05:03.985 } 00:05:03.985 ] 00:05:03.985 } 00:05:03.985 ] 00:05:03.985 } 00:05:03.985 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:05:03.985 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1602301 00:05:03.985 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1602301 ']' 00:05:03.985 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1602301 00:05:03.985 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:03.985 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.985 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602301 00:05:04.247 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.247 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.247 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602301' 00:05:04.247 killing process with pid 1602301 00:05:04.247 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1602301 00:05:04.247 06:00:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1602301 00:05:04.507 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1602442 00:05:04.507 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.507 06:00:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:09.785 06:01:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1602442 00:05:09.785 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1602442 ']' 00:05:09.785 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1602442 00:05:09.785 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:09.786 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.786 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602442 00:05:09.786 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.786 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.786 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602442' 00:05:09.786 killing process with pid 1602442 00:05:09.786 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1602442 00:05:09.786 06:01:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1602442 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.061 00:05:10.061 real 0m6.513s 00:05:10.061 user 0m6.106s 00:05:10.061 sys 0m0.676s 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 
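The skip_rpc_with_json flow above boils down to: configure the live target over RPC, snapshot the result with save_config, restart the target from that JSON via --json, and grep the second run's log for the 'TCP Transport Init' notice to prove the saved nvmf transport was replayed. Condensed into a sketch (paths shortened; $tgt_pid is the first target started earlier):

  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  kill "$tgt_pid"; wait "$tgt_pid"
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' log.txt && echo "saved config replayed"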
************************************ 00:05:10.061 END TEST skip_rpc_with_json 00:05:10.061 ************************************ 00:05:10.061 06:01:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.061 06:01:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:10.061 06:01:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.061 06:01:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.061 06:01:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.061 ************************************ 00:05:10.061 START TEST skip_rpc_with_delay 00:05:10.061 ************************************ 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.061 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.062 [2024-07-23 06:01:03.298354] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
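skip_rpc_with_delay only has to confirm the error just logged: combining --no-rpc-server with --wait-for-rpc is contradictory, so spdk_tgt must refuse to start. A simplified sketch of the assertion:

  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started"; exit 1
  fi
  echo "rejected as expected"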
00:05:10.062 [2024-07-23 06:01:03.298463] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.062 00:05:10.062 real 0m0.064s 00:05:10.062 user 0m0.045s 00:05:10.062 sys 0m0.019s 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.062 06:01:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:10.062 ************************************ 00:05:10.062 END TEST skip_rpc_with_delay 00:05:10.062 ************************************ 00:05:10.062 06:01:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.062 06:01:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:10.062 06:01:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:10.062 06:01:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:10.062 06:01:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.062 06:01:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.062 06:01:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.062 ************************************ 00:05:10.062 START TEST exit_on_failed_rpc_init 00:05:10.062 ************************************ 00:05:10.062 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:10.062 06:01:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1603160 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1603160 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1603160 ']' 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.063 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.326 [2024-07-23 06:01:03.408882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
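exit_on_failed_rpc_init, which is starting up here, relies on an RPC socket collision: the first spdk_tgt instance owns the default /var/tmp/spdk.sock, so a second instance launched without a distinct socket path must log that the Unix domain socket is in use and exit non-zero. A minimal sketch of that collision (running two targets for real would need a separate socket for the second one, e.g. via -r):

  ./build/bin/spdk_tgt -m 0x1 &            # first instance claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 5
  if ./build/bin/spdk_tgt -m 0x2; then     # same default RPC socket -> expected to fail
      echo "unexpected: second instance started"; exit 1
  fi
  kill "$first_pid"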
00:05:10.326 [2024-07-23 06:01:03.408988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603160 ] 00:05:10.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.326 [2024-07-23 06:01:03.442417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:10.326 [2024-07-23 06:01:03.472681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.326 [2024-07-23 06:01:03.566166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.584 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.584 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:10.584 06:01:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.584 06:01:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:10.585 06:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.585 [2024-07-23 06:01:03.872822] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:10.585 [2024-07-23 06:01:03.872898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603209 ] 00:05:10.585 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.585 [2024-07-23 06:01:03.904376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:10.844 [2024-07-23 06:01:03.934623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.844 [2024-07-23 06:01:04.027693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.844 [2024-07-23 06:01:04.027791] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:10.844 [2024-07-23 06:01:04.027810] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:10.845 [2024-07-23 06:01:04.027821] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1603160 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1603160 ']' 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1603160 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603160 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603160' 00:05:10.845 killing process with pid 1603160 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1603160 00:05:10.845 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1603160 00:05:11.412 00:05:11.412 real 0m1.195s 00:05:11.412 user 0m1.333s 00:05:11.412 sys 0m0.444s 00:05:11.412 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.412 06:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.412 ************************************ 00:05:11.412 END TEST exit_on_failed_rpc_init 00:05:11.412 ************************************ 00:05:11.412 06:01:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.412 06:01:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.412 00:05:11.412 real 0m13.470s 00:05:11.412 user 0m12.710s 00:05:11.412 sys 0m1.623s 00:05:11.412 06:01:04 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.412 06:01:04 skip_rpc 
-- common/autotest_common.sh@10 -- # set +x 00:05:11.412 ************************************ 00:05:11.412 END TEST skip_rpc 00:05:11.412 ************************************ 00:05:11.412 06:01:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.412 06:01:04 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:11.412 06:01:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.412 06:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.412 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.412 ************************************ 00:05:11.412 START TEST rpc_client 00:05:11.412 ************************************ 00:05:11.412 06:01:04 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:11.412 * Looking for test storage... 00:05:11.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:11.412 06:01:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:11.412 OK 00:05:11.412 06:01:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:11.412 00:05:11.412 real 0m0.062s 00:05:11.412 user 0m0.027s 00:05:11.412 sys 0m0.041s 00:05:11.412 06:01:04 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.412 06:01:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:11.412 ************************************ 00:05:11.412 END TEST rpc_client 00:05:11.412 ************************************ 00:05:11.412 06:01:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.412 06:01:04 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:11.412 06:01:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.412 06:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.412 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.412 ************************************ 00:05:11.412 START TEST json_config 00:05:11.412 ************************************ 00:05:11.412 06:01:04 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:11.672 06:01:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.672 06:01:04 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.672 06:01:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.672 06:01:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.672 06:01:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.672 06:01:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.672 06:01:04 json_config -- paths/export.sh@5 -- # export PATH 00:05:11.672 06:01:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@47 -- # : 0 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:05:11.672 06:01:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:11.672 INFO: JSON configuration test init 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 06:01:04 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:11.672 06:01:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.672 06:01:04 json_config -- json_config/common.sh@10 -- # shift 00:05:11.672 06:01:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.672 06:01:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.672 06:01:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.672 06:01:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.672 06:01:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.672 06:01:04 json_config -- json_config/common.sh@22 -- # 
app_pid["$app"]=1603412 00:05:11.672 06:01:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:11.672 06:01:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.672 Waiting for target to run... 00:05:11.672 06:01:04 json_config -- json_config/common.sh@25 -- # waitforlisten 1603412 /var/tmp/spdk_tgt.sock 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@829 -- # '[' -z 1603412 ']' 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.672 06:01:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.672 [2024-07-23 06:01:04.842078] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:11.672 [2024-07-23 06:01:04.842177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603412 ] 00:05:11.672 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.932 [2024-07-23 06:01:05.158717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:11.933 [2024-07-23 06:01:05.192669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.933 [2024-07-23 06:01:05.256479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.507 06:01:05 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.507 06:01:05 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:12.507 06:01:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:12.507 00:05:12.507 06:01:05 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:12.507 06:01:05 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:12.507 06:01:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.507 06:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.507 06:01:05 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:12.507 06:01:05 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:12.507 06:01:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.507 06:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.507 06:01:05 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:12.507 06:01:05 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:12.507 06:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:15.805 06:01:08 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:15.805 06:01:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:15.805 06:01:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.805 06:01:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.805 06:01:08 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:15.805 06:01:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:15.805 06:01:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:15.805 06:01:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:15.805 06:01:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:15.805 06:01:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@51 -- # sort 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@58 
-- # timing_exit tgt_check_notification_types 00:05:16.063 06:01:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.063 06:01:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:16.063 06:01:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.063 06:01:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:16.063 06:01:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.063 06:01:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.321 MallocForNvmf0 00:05:16.321 06:01:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.321 06:01:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.579 MallocForNvmf1 00:05:16.579 06:01:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:16.579 06:01:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:16.837 [2024-07-23 06:01:09.952323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.837 06:01:09 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:16.837 06:01:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.094 06:01:10 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.094 06:01:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.352 06:01:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.352 06:01:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.610 06:01:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:17.610 06:01:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:17.610 [2024-07-23 06:01:10.931493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:17.610 06:01:10 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:17.610 06:01:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.611 06:01:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.868 06:01:10 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:17.868 06:01:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.868 06:01:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.868 06:01:10 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:17.868 06:01:10 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.868 06:01:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.125 MallocBdevForConfigChangeCheck 00:05:18.125 06:01:11 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:18.125 06:01:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.125 06:01:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.125 06:01:11 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:18.125 06:01:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.383 06:01:11 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:18.383 INFO: shutting down applications... 
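For readability, the NVMe-oF target setup exercised by the json_config test above reduces to the following RPC sequence; the commands are copied from the xtrace output (rpc.py paths shortened, all presumably issued against the /var/tmp/spdk_tgt.sock socket), so this is a recap rather than a definitive script:
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420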
00:05:18.383 06:01:11 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:18.384 06:01:11 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:18.384 06:01:11 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:18.384 06:01:11 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:20.290 Calling clear_iscsi_subsystem 00:05:20.290 Calling clear_nvmf_subsystem 00:05:20.290 Calling clear_nbd_subsystem 00:05:20.290 Calling clear_ublk_subsystem 00:05:20.290 Calling clear_vhost_blk_subsystem 00:05:20.290 Calling clear_vhost_scsi_subsystem 00:05:20.290 Calling clear_bdev_subsystem 00:05:20.290 06:01:13 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:20.290 06:01:13 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:20.290 06:01:13 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:20.290 06:01:13 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.290 06:01:13 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:20.290 06:01:13 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:20.549 06:01:13 json_config -- json_config/json_config.sh@349 -- # break 00:05:20.549 06:01:13 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:20.549 06:01:13 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:20.549 06:01:13 json_config -- json_config/common.sh@31 -- # local app=target 00:05:20.549 06:01:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.549 06:01:13 json_config -- json_config/common.sh@35 -- # [[ -n 1603412 ]] 00:05:20.549 06:01:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1603412 00:05:20.549 06:01:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.549 06:01:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.549 06:01:13 json_config -- json_config/common.sh@41 -- # kill -0 1603412 00:05:20.549 06:01:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.120 06:01:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.120 06:01:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.120 06:01:14 json_config -- json_config/common.sh@41 -- # kill -0 1603412 00:05:21.120 06:01:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.120 06:01:14 json_config -- json_config/common.sh@43 -- # break 00:05:21.120 06:01:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.120 06:01:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.120 SPDK target shutdown done 00:05:21.120 06:01:14 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:21.120 INFO: relaunching applications... 
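The teardown above pairs test/json_config/clear_config.py with a check that the remaining configuration is empty before the target is stopped; a rough sketch of what the xtrace lines correspond to (the piping between save_config and the two config_filter.py passes is assumed, not shown verbatim in the log):
  clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method delete_global_parameters | config_filter.py -method check_empty
  kill -SIGINT 1603412        # then poll with 'kill -0 1603412' until the target exits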
00:05:21.120 06:01:14 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.120 06:01:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.120 06:01:14 json_config -- json_config/common.sh@10 -- # shift 00:05:21.120 06:01:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.120 06:01:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.120 06:01:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.120 06:01:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.120 06:01:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.120 06:01:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1604719 00:05:21.120 06:01:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.120 06:01:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.120 Waiting for target to run... 00:05:21.120 06:01:14 json_config -- json_config/common.sh@25 -- # waitforlisten 1604719 /var/tmp/spdk_tgt.sock 00:05:21.120 06:01:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 1604719 ']' 00:05:21.120 06:01:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.120 06:01:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.120 06:01:14 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.120 06:01:14 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.120 06:01:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.120 [2024-07-23 06:01:14.215062] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:21.120 [2024-07-23 06:01:14.215164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604719 ] 00:05:21.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.380 [2024-07-23 06:01:14.537599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:21.380 [2024-07-23 06:01:14.571779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.380 [2024-07-23 06:01:14.638601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.669 [2024-07-23 06:01:17.670482] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.669 [2024-07-23 06:01:17.702954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.669 06:01:17 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.669 06:01:17 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:24.669 06:01:17 json_config -- json_config/common.sh@26 -- # echo '' 00:05:24.669 00:05:24.669 06:01:17 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:24.669 06:01:17 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:24.669 INFO: Checking if target configuration is the same... 00:05:24.669 06:01:17 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.669 06:01:17 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:24.669 06:01:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.669 + '[' 2 -ne 2 ']' 00:05:24.669 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:24.669 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:24.669 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:24.669 +++ basename /dev/fd/62 00:05:24.669 ++ mktemp /tmp/62.XXX 00:05:24.669 + tmp_file_1=/tmp/62.nfc 00:05:24.669 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.669 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.669 + tmp_file_2=/tmp/spdk_tgt_config.json.jOn 00:05:24.669 + ret=0 00:05:24.669 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.927 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.927 + diff -u /tmp/62.nfc /tmp/spdk_tgt_config.json.jOn 00:05:24.927 + echo 'INFO: JSON config files are the same' 00:05:24.927 INFO: JSON config files are the same 00:05:24.927 + rm /tmp/62.nfc /tmp/spdk_tgt_config.json.jOn 00:05:24.927 + exit 0 00:05:24.927 06:01:18 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:24.927 06:01:18 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:24.927 INFO: changing configuration and checking if this can be detected... 
00:05:24.927 06:01:18 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.927 06:01:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.184 06:01:18 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.184 06:01:18 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:25.184 06:01:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.184 + '[' 2 -ne 2 ']' 00:05:25.184 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:25.184 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:25.184 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.184 +++ basename /dev/fd/62 00:05:25.184 ++ mktemp /tmp/62.XXX 00:05:25.184 + tmp_file_1=/tmp/62.9Wb 00:05:25.184 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.184 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:25.184 + tmp_file_2=/tmp/spdk_tgt_config.json.70A 00:05:25.184 + ret=0 00:05:25.184 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.750 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.750 + diff -u /tmp/62.9Wb /tmp/spdk_tgt_config.json.70A 00:05:25.750 + ret=1 00:05:25.750 + echo '=== Start of file: /tmp/62.9Wb ===' 00:05:25.750 + cat /tmp/62.9Wb 00:05:25.750 + echo '=== End of file: /tmp/62.9Wb ===' 00:05:25.750 + echo '' 00:05:25.750 + echo '=== Start of file: /tmp/spdk_tgt_config.json.70A ===' 00:05:25.750 + cat /tmp/spdk_tgt_config.json.70A 00:05:25.750 + echo '=== End of file: /tmp/spdk_tgt_config.json.70A ===' 00:05:25.750 + echo '' 00:05:25.750 + rm /tmp/62.9Wb /tmp/spdk_tgt_config.json.70A 00:05:25.750 + exit 1 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:25.750 INFO: configuration change detected. 
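Both the "same" and "changed" configuration checks above go through test/json_config/json_diff.sh, which sorts the live configuration and the saved file before diffing them. A simplified sketch (temp-file names here are placeholders and the redirections are assumed; the log uses mktemp paths such as /tmp/62.9Wb):
  rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > /tmp/live.json
  config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json     # exit 0: configs match, exit 1: change detected
Deleting MallocBdevForConfigChangeCheck before the second comparison is what makes the diff non-empty, which is how the test proves that a configuration change is detected.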
00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@321 -- # [[ -n 1604719 ]] 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.750 06:01:18 json_config -- json_config/json_config.sh@327 -- # killprocess 1604719 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@948 -- # '[' -z 1604719 ']' 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@952 -- # kill -0 1604719 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@953 -- # uname 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1604719 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1604719' 00:05:25.750 killing process with pid 1604719 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@967 -- # kill 1604719 00:05:25.750 06:01:18 json_config -- common/autotest_common.sh@972 -- # wait 1604719 00:05:27.654 06:01:20 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.654 06:01:20 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:27.654 06:01:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.654 06:01:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.654 06:01:20 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:27.654 06:01:20 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:27.654 INFO: Success 00:05:27.654 00:05:27.654 real 0m15.774s 
00:05:27.654 user 0m17.647s 00:05:27.654 sys 0m1.840s 00:05:27.654 06:01:20 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.654 06:01:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.654 ************************************ 00:05:27.654 END TEST json_config 00:05:27.654 ************************************ 00:05:27.654 06:01:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.654 06:01:20 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:27.654 06:01:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.654 06:01:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.654 06:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:27.654 ************************************ 00:05:27.654 START TEST json_config_extra_key 00:05:27.654 ************************************ 00:05:27.654 06:01:20 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:27.654 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.654 06:01:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.655 06:01:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.655 06:01:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.655 06:01:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.655 06:01:20 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.655 06:01:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.655 06:01:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.655 06:01:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:27.655 06:01:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.655 06:01:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:27.655 06:01:20 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:27.655 INFO: launching applications... 00:05:27.655 06:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1605520 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.655 Waiting for target to run... 00:05:27.655 06:01:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1605520 /var/tmp/spdk_tgt.sock 00:05:27.655 06:01:20 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1605520 ']' 00:05:27.655 06:01:20 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.655 06:01:20 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.655 06:01:20 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.655 06:01:20 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.655 06:01:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.655 [2024-07-23 06:01:20.657234] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:05:27.655 [2024-07-23 06:01:20.657331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605520 ] 00:05:27.655 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.655 [2024-07-23 06:01:20.966126] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:27.913 [2024-07-23 06:01:20.999959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.913 [2024-07-23 06:01:21.063039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.480 06:01:21 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.480 06:01:21 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:28.480 00:05:28.480 06:01:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:28.480 INFO: shutting down applications... 00:05:28.480 06:01:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1605520 ]] 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1605520 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1605520 00:05:28.480 06:01:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.047 06:01:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.047 06:01:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.047 06:01:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1605520 00:05:29.047 06:01:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.047 06:01:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:29.047 06:01:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.047 06:01:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.047 SPDK target shutdown done 00:05:29.047 06:01:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:29.047 Success 00:05:29.047 00:05:29.047 real 0m1.560s 00:05:29.047 user 0m1.536s 00:05:29.047 sys 0m0.441s 00:05:29.047 06:01:22 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.047 06:01:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.047 ************************************ 00:05:29.047 END TEST json_config_extra_key 00:05:29.047 ************************************ 00:05:29.047 06:01:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.047 06:01:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.047 06:01:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.047 06:01:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.047 06:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.047 ************************************ 00:05:29.047 START TEST alias_rpc 00:05:29.047 ************************************ 00:05:29.047 06:01:22 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.047 * Looking for test storage... 00:05:29.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:29.047 06:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.047 06:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1605817 00:05:29.047 06:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.047 06:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1605817 00:05:29.047 06:01:22 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1605817 ']' 00:05:29.047 06:01:22 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.047 06:01:22 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.047 06:01:22 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.047 06:01:22 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.047 06:01:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.047 [2024-07-23 06:01:22.271698] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:29.047 [2024-07-23 06:01:22.271773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605817 ] 00:05:29.047 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.048 [2024-07-23 06:01:22.306335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:29.048 [2024-07-23 06:01:22.331961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.307 [2024-07-23 06:01:22.423162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.570 06:01:22 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.570 06:01:22 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.570 06:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.830 06:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1605817 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1605817 ']' 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1605817 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1605817 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1605817' 00:05:29.830 killing process with pid 1605817 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@967 -- # kill 1605817 00:05:29.830 06:01:22 alias_rpc -- common/autotest_common.sh@972 -- # wait 1605817 00:05:30.097 00:05:30.097 real 0m1.209s 00:05:30.097 user 0m1.292s 00:05:30.097 sys 0m0.418s 00:05:30.097 06:01:23 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.097 06:01:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.097 ************************************ 00:05:30.097 END TEST alias_rpc 00:05:30.097 ************************************ 00:05:30.097 06:01:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.097 06:01:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:30.097 06:01:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.097 06:01:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.097 06:01:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.097 06:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.097 ************************************ 00:05:30.097 START TEST spdkcli_tcp 00:05:30.097 ************************************ 00:05:30.097 06:01:23 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.365 * Looking for test storage... 
00:05:30.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1606014 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:30.365 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1606014 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1606014 ']' 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.365 06:01:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 [2024-07-23 06:01:23.522736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:30.365 [2024-07-23 06:01:23.522827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606014 ] 00:05:30.365 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.365 [2024-07-23 06:01:23.556588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:30.365 [2024-07-23 06:01:23.584086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.365 [2024-07-23 06:01:23.668422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.365 [2024-07-23 06:01:23.668425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.623 06:01:23 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.623 06:01:23 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:30.623 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1606024 00:05:30.623 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.623 06:01:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.882 [ 00:05:30.882 "bdev_malloc_delete", 00:05:30.882 "bdev_malloc_create", 00:05:30.882 "bdev_null_resize", 00:05:30.882 "bdev_null_delete", 00:05:30.882 "bdev_null_create", 00:05:30.882 "bdev_nvme_cuse_unregister", 00:05:30.882 "bdev_nvme_cuse_register", 00:05:30.882 "bdev_opal_new_user", 00:05:30.882 "bdev_opal_set_lock_state", 00:05:30.882 "bdev_opal_delete", 00:05:30.882 "bdev_opal_get_info", 00:05:30.882 "bdev_opal_create", 00:05:30.882 "bdev_nvme_opal_revert", 00:05:30.882 "bdev_nvme_opal_init", 00:05:30.882 "bdev_nvme_send_cmd", 00:05:30.882 "bdev_nvme_get_path_iostat", 00:05:30.882 "bdev_nvme_get_mdns_discovery_info", 00:05:30.882 "bdev_nvme_stop_mdns_discovery", 00:05:30.882 "bdev_nvme_start_mdns_discovery", 00:05:30.882 "bdev_nvme_set_multipath_policy", 00:05:30.882 "bdev_nvme_set_preferred_path", 00:05:30.882 "bdev_nvme_get_io_paths", 00:05:30.882 "bdev_nvme_remove_error_injection", 00:05:30.882 "bdev_nvme_add_error_injection", 00:05:30.882 "bdev_nvme_get_discovery_info", 00:05:30.882 "bdev_nvme_stop_discovery", 00:05:30.882 "bdev_nvme_start_discovery", 00:05:30.882 "bdev_nvme_get_controller_health_info", 00:05:30.882 "bdev_nvme_disable_controller", 00:05:30.882 "bdev_nvme_enable_controller", 00:05:30.882 "bdev_nvme_reset_controller", 00:05:30.882 "bdev_nvme_get_transport_statistics", 00:05:30.882 "bdev_nvme_apply_firmware", 00:05:30.882 "bdev_nvme_detach_controller", 00:05:30.882 "bdev_nvme_get_controllers", 00:05:30.882 "bdev_nvme_attach_controller", 00:05:30.882 "bdev_nvme_set_hotplug", 00:05:30.882 "bdev_nvme_set_options", 00:05:30.882 "bdev_passthru_delete", 00:05:30.882 "bdev_passthru_create", 00:05:30.882 "bdev_lvol_set_parent_bdev", 00:05:30.882 "bdev_lvol_set_parent", 00:05:30.882 "bdev_lvol_check_shallow_copy", 00:05:30.882 "bdev_lvol_start_shallow_copy", 00:05:30.882 "bdev_lvol_grow_lvstore", 00:05:30.882 "bdev_lvol_get_lvols", 00:05:30.882 "bdev_lvol_get_lvstores", 00:05:30.882 "bdev_lvol_delete", 00:05:30.882 "bdev_lvol_set_read_only", 00:05:30.882 "bdev_lvol_resize", 00:05:30.882 "bdev_lvol_decouple_parent", 00:05:30.882 "bdev_lvol_inflate", 00:05:30.882 "bdev_lvol_rename", 00:05:30.882 "bdev_lvol_clone_bdev", 00:05:30.882 "bdev_lvol_clone", 00:05:30.882 "bdev_lvol_snapshot", 00:05:30.882 "bdev_lvol_create", 00:05:30.882 "bdev_lvol_delete_lvstore", 00:05:30.882 "bdev_lvol_rename_lvstore", 00:05:30.882 "bdev_lvol_create_lvstore", 00:05:30.882 "bdev_raid_set_options", 00:05:30.882 "bdev_raid_remove_base_bdev", 00:05:30.882 "bdev_raid_add_base_bdev", 00:05:30.882 "bdev_raid_delete", 00:05:30.882 "bdev_raid_create", 00:05:30.882 "bdev_raid_get_bdevs", 00:05:30.882 "bdev_error_inject_error", 00:05:30.882 "bdev_error_delete", 
00:05:30.882 "bdev_error_create", 00:05:30.882 "bdev_split_delete", 00:05:30.882 "bdev_split_create", 00:05:30.882 "bdev_delay_delete", 00:05:30.882 "bdev_delay_create", 00:05:30.882 "bdev_delay_update_latency", 00:05:30.882 "bdev_zone_block_delete", 00:05:30.882 "bdev_zone_block_create", 00:05:30.882 "blobfs_create", 00:05:30.882 "blobfs_detect", 00:05:30.882 "blobfs_set_cache_size", 00:05:30.882 "bdev_aio_delete", 00:05:30.882 "bdev_aio_rescan", 00:05:30.882 "bdev_aio_create", 00:05:30.882 "bdev_ftl_set_property", 00:05:30.882 "bdev_ftl_get_properties", 00:05:30.882 "bdev_ftl_get_stats", 00:05:30.882 "bdev_ftl_unmap", 00:05:30.882 "bdev_ftl_unload", 00:05:30.882 "bdev_ftl_delete", 00:05:30.882 "bdev_ftl_load", 00:05:30.882 "bdev_ftl_create", 00:05:30.882 "bdev_virtio_attach_controller", 00:05:30.882 "bdev_virtio_scsi_get_devices", 00:05:30.882 "bdev_virtio_detach_controller", 00:05:30.882 "bdev_virtio_blk_set_hotplug", 00:05:30.882 "bdev_iscsi_delete", 00:05:30.882 "bdev_iscsi_create", 00:05:30.882 "bdev_iscsi_set_options", 00:05:30.882 "accel_error_inject_error", 00:05:30.882 "ioat_scan_accel_module", 00:05:30.882 "dsa_scan_accel_module", 00:05:30.882 "iaa_scan_accel_module", 00:05:30.882 "vfu_virtio_create_scsi_endpoint", 00:05:30.882 "vfu_virtio_scsi_remove_target", 00:05:30.882 "vfu_virtio_scsi_add_target", 00:05:30.882 "vfu_virtio_create_blk_endpoint", 00:05:30.882 "vfu_virtio_delete_endpoint", 00:05:30.882 "keyring_file_remove_key", 00:05:30.882 "keyring_file_add_key", 00:05:30.882 "keyring_linux_set_options", 00:05:30.882 "iscsi_get_histogram", 00:05:30.882 "iscsi_enable_histogram", 00:05:30.882 "iscsi_set_options", 00:05:30.882 "iscsi_get_auth_groups", 00:05:30.882 "iscsi_auth_group_remove_secret", 00:05:30.882 "iscsi_auth_group_add_secret", 00:05:30.882 "iscsi_delete_auth_group", 00:05:30.882 "iscsi_create_auth_group", 00:05:30.882 "iscsi_set_discovery_auth", 00:05:30.882 "iscsi_get_options", 00:05:30.882 "iscsi_target_node_request_logout", 00:05:30.882 "iscsi_target_node_set_redirect", 00:05:30.882 "iscsi_target_node_set_auth", 00:05:30.882 "iscsi_target_node_add_lun", 00:05:30.882 "iscsi_get_stats", 00:05:30.882 "iscsi_get_connections", 00:05:30.882 "iscsi_portal_group_set_auth", 00:05:30.882 "iscsi_start_portal_group", 00:05:30.882 "iscsi_delete_portal_group", 00:05:30.882 "iscsi_create_portal_group", 00:05:30.882 "iscsi_get_portal_groups", 00:05:30.882 "iscsi_delete_target_node", 00:05:30.882 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.882 "iscsi_target_node_add_pg_ig_maps", 00:05:30.882 "iscsi_create_target_node", 00:05:30.882 "iscsi_get_target_nodes", 00:05:30.882 "iscsi_delete_initiator_group", 00:05:30.882 "iscsi_initiator_group_remove_initiators", 00:05:30.882 "iscsi_initiator_group_add_initiators", 00:05:30.882 "iscsi_create_initiator_group", 00:05:30.882 "iscsi_get_initiator_groups", 00:05:30.882 "nvmf_set_crdt", 00:05:30.882 "nvmf_set_config", 00:05:30.882 "nvmf_set_max_subsystems", 00:05:30.882 "nvmf_stop_mdns_prr", 00:05:30.882 "nvmf_publish_mdns_prr", 00:05:30.882 "nvmf_subsystem_get_listeners", 00:05:30.882 "nvmf_subsystem_get_qpairs", 00:05:30.882 "nvmf_subsystem_get_controllers", 00:05:30.882 "nvmf_get_stats", 00:05:30.882 "nvmf_get_transports", 00:05:30.882 "nvmf_create_transport", 00:05:30.882 "nvmf_get_targets", 00:05:30.882 "nvmf_delete_target", 00:05:30.882 "nvmf_create_target", 00:05:30.882 "nvmf_subsystem_allow_any_host", 00:05:30.882 "nvmf_subsystem_remove_host", 00:05:30.882 "nvmf_subsystem_add_host", 00:05:30.882 "nvmf_ns_remove_host", 
00:05:30.882 "nvmf_ns_add_host", 00:05:30.882 "nvmf_subsystem_remove_ns", 00:05:30.882 "nvmf_subsystem_add_ns", 00:05:30.882 "nvmf_subsystem_listener_set_ana_state", 00:05:30.882 "nvmf_discovery_get_referrals", 00:05:30.882 "nvmf_discovery_remove_referral", 00:05:30.882 "nvmf_discovery_add_referral", 00:05:30.882 "nvmf_subsystem_remove_listener", 00:05:30.882 "nvmf_subsystem_add_listener", 00:05:30.882 "nvmf_delete_subsystem", 00:05:30.882 "nvmf_create_subsystem", 00:05:30.882 "nvmf_get_subsystems", 00:05:30.882 "env_dpdk_get_mem_stats", 00:05:30.882 "nbd_get_disks", 00:05:30.882 "nbd_stop_disk", 00:05:30.882 "nbd_start_disk", 00:05:30.882 "ublk_recover_disk", 00:05:30.882 "ublk_get_disks", 00:05:30.882 "ublk_stop_disk", 00:05:30.882 "ublk_start_disk", 00:05:30.882 "ublk_destroy_target", 00:05:30.882 "ublk_create_target", 00:05:30.882 "virtio_blk_create_transport", 00:05:30.882 "virtio_blk_get_transports", 00:05:30.882 "vhost_controller_set_coalescing", 00:05:30.883 "vhost_get_controllers", 00:05:30.883 "vhost_delete_controller", 00:05:30.883 "vhost_create_blk_controller", 00:05:30.883 "vhost_scsi_controller_remove_target", 00:05:30.883 "vhost_scsi_controller_add_target", 00:05:30.883 "vhost_start_scsi_controller", 00:05:30.883 "vhost_create_scsi_controller", 00:05:30.883 "thread_set_cpumask", 00:05:30.883 "framework_get_governor", 00:05:30.883 "framework_get_scheduler", 00:05:30.883 "framework_set_scheduler", 00:05:30.883 "framework_get_reactors", 00:05:30.883 "thread_get_io_channels", 00:05:30.883 "thread_get_pollers", 00:05:30.883 "thread_get_stats", 00:05:30.883 "framework_monitor_context_switch", 00:05:30.883 "spdk_kill_instance", 00:05:30.883 "log_enable_timestamps", 00:05:30.883 "log_get_flags", 00:05:30.883 "log_clear_flag", 00:05:30.883 "log_set_flag", 00:05:30.883 "log_get_level", 00:05:30.883 "log_set_level", 00:05:30.883 "log_get_print_level", 00:05:30.883 "log_set_print_level", 00:05:30.883 "framework_enable_cpumask_locks", 00:05:30.883 "framework_disable_cpumask_locks", 00:05:30.883 "framework_wait_init", 00:05:30.883 "framework_start_init", 00:05:30.883 "scsi_get_devices", 00:05:30.883 "bdev_get_histogram", 00:05:30.883 "bdev_enable_histogram", 00:05:30.883 "bdev_set_qos_limit", 00:05:30.883 "bdev_set_qd_sampling_period", 00:05:30.883 "bdev_get_bdevs", 00:05:30.883 "bdev_reset_iostat", 00:05:30.883 "bdev_get_iostat", 00:05:30.883 "bdev_examine", 00:05:30.883 "bdev_wait_for_examine", 00:05:30.883 "bdev_set_options", 00:05:30.883 "notify_get_notifications", 00:05:30.883 "notify_get_types", 00:05:30.883 "accel_get_stats", 00:05:30.883 "accel_set_options", 00:05:30.883 "accel_set_driver", 00:05:30.883 "accel_crypto_key_destroy", 00:05:30.883 "accel_crypto_keys_get", 00:05:30.883 "accel_crypto_key_create", 00:05:30.883 "accel_assign_opc", 00:05:30.883 "accel_get_module_info", 00:05:30.883 "accel_get_opc_assignments", 00:05:30.883 "vmd_rescan", 00:05:30.883 "vmd_remove_device", 00:05:30.883 "vmd_enable", 00:05:30.883 "sock_get_default_impl", 00:05:30.883 "sock_set_default_impl", 00:05:30.883 "sock_impl_set_options", 00:05:30.883 "sock_impl_get_options", 00:05:30.883 "iobuf_get_stats", 00:05:30.883 "iobuf_set_options", 00:05:30.883 "keyring_get_keys", 00:05:30.883 "framework_get_pci_devices", 00:05:30.883 "framework_get_config", 00:05:30.883 "framework_get_subsystems", 00:05:30.883 "vfu_tgt_set_base_path", 00:05:30.883 "trace_get_info", 00:05:30.883 "trace_get_tpoint_group_mask", 00:05:30.883 "trace_disable_tpoint_group", 00:05:30.883 "trace_enable_tpoint_group", 00:05:30.883 
"trace_clear_tpoint_mask", 00:05:30.883 "trace_set_tpoint_mask", 00:05:30.883 "spdk_get_version", 00:05:30.883 "rpc_get_methods" 00:05:30.883 ] 00:05:30.883 06:01:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.883 06:01:24 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.883 06:01:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 06:01:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.883 06:01:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1606014 00:05:30.883 06:01:24 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1606014 ']' 00:05:30.883 06:01:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1606014 00:05:30.883 06:01:24 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:30.883 06:01:24 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.883 06:01:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606014 00:05:31.143 06:01:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.143 06:01:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.143 06:01:24 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1606014' 00:05:31.143 killing process with pid 1606014 00:05:31.143 06:01:24 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1606014 00:05:31.143 06:01:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1606014 00:05:31.402 00:05:31.402 real 0m1.220s 00:05:31.402 user 0m2.185s 00:05:31.402 sys 0m0.445s 00:05:31.402 06:01:24 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.402 06:01:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.402 ************************************ 00:05:31.402 END TEST spdkcli_tcp 00:05:31.402 ************************************ 00:05:31.402 06:01:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.402 06:01:24 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.402 06:01:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.402 06:01:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.402 06:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.402 ************************************ 00:05:31.402 START TEST dpdk_mem_utility 00:05:31.402 ************************************ 00:05:31.402 06:01:24 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.402 * Looking for test storage... 
00:05:31.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.402 06:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.402 06:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1606220 00:05:31.403 06:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.403 06:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1606220 00:05:31.403 06:01:24 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1606220 ']' 00:05:31.403 06:01:24 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.403 06:01:24 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.403 06:01:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.403 06:01:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.403 06:01:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.662 [2024-07-23 06:01:24.785210] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:31.662 [2024-07-23 06:01:24.785288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606220 ] 00:05:31.662 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.662 [2024-07-23 06:01:24.815519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
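The dpdk_mem_utility test starting here exercises two things against the freshly launched target: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK memory state to a dump file, and scripts/dpdk_mem_info.py, which summarizes that dump. A minimal sketch of the same sequence, with rpc.py talking to the default /var/tmp/spdk.sock as the trace below does through rpc_cmd:

    # ask the running target to dump its DPDK memory state
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py

    # detailed element listing for a single heap (heap id 0), as at the end of the trace
    ./scripts/dpdk_mem_info.py -m 0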
00:05:31.662 [2024-07-23 06:01:24.843353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.662 [2024-07-23 06:01:24.933621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.920 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.920 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:31.920 06:01:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:31.920 06:01:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:31.920 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.920 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.920 { 00:05:31.920 "filename": "/tmp/spdk_mem_dump.txt" 00:05:31.920 } 00:05:31.920 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.920 06:01:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.920 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:31.920 1 heaps totaling size 814.000000 MiB 00:05:31.920 size: 814.000000 MiB heap id: 0 00:05:31.920 end heaps---------- 00:05:31.920 8 mempools totaling size 598.116089 MiB 00:05:31.920 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:31.920 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:31.920 size: 84.521057 MiB name: bdev_io_1606220 00:05:31.920 size: 51.011292 MiB name: evtpool_1606220 00:05:31.920 size: 50.003479 MiB name: msgpool_1606220 00:05:31.920 size: 21.763794 MiB name: PDU_Pool 00:05:31.920 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:31.920 size: 0.026123 MiB name: Session_Pool 00:05:31.920 end mempools------- 00:05:31.920 6 memzones totaling size 4.142822 MiB 00:05:31.920 size: 1.000366 MiB name: RG_ring_0_1606220 00:05:31.920 size: 1.000366 MiB name: RG_ring_1_1606220 00:05:31.920 size: 1.000366 MiB name: RG_ring_4_1606220 00:05:31.920 size: 1.000366 MiB name: RG_ring_5_1606220 00:05:31.920 size: 0.125366 MiB name: RG_ring_2_1606220 00:05:31.920 size: 0.015991 MiB name: RG_ring_3_1606220 00:05:31.920 end memzones------- 00:05:31.921 06:01:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:32.180 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:32.180 list of free elements. 
size: 12.519348 MiB 00:05:32.180 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:32.180 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:32.180 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:32.180 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:32.180 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:32.180 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:32.180 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:32.180 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:32.180 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:32.180 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:32.180 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:32.180 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:32.180 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:32.180 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:32.180 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:32.180 list of standard malloc elements. size: 199.218079 MiB 00:05:32.180 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:32.180 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:32.180 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:32.180 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:32.180 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:32.180 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:32.180 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:32.180 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:32.180 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:32.180 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:32.180 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:32.180 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:32.181 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:32.181 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:32.181 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:32.181 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:32.181 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:32.181 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:32.181 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:32.181 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:32.181 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:32.181 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:32.181 list of memzone associated elements. size: 602.262573 MiB 00:05:32.181 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:32.181 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:32.181 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:32.181 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:32.181 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:32.181 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1606220_0 00:05:32.181 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:32.181 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1606220_0 00:05:32.181 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:32.181 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1606220_0 00:05:32.181 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:32.181 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:32.181 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:32.181 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:32.181 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:32.181 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1606220 00:05:32.181 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:32.181 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1606220 00:05:32.181 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:32.181 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1606220 00:05:32.181 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:32.181 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:32.181 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:32.181 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:32.181 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:32.181 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:32.181 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:32.181 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:32.181 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:32.181 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1606220 00:05:32.181 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:32.181 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1606220 00:05:32.181 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:32.181 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1606220 00:05:32.181 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:32.181 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1606220 00:05:32.181 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:32.181 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1606220 00:05:32.181 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:32.181 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:32.181 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:32.181 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:32.181 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:32.181 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:32.181 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:32.181 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1606220 00:05:32.181 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:32.181 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:32.181 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:32.181 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:32.181 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:32.181 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1606220 00:05:32.181 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:32.181 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:32.181 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:32.181 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1606220 00:05:32.181 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:32.181 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1606220 00:05:32.181 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:32.181 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:32.181 06:01:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:32.181 06:01:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1606220 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1606220 ']' 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1606220 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606220 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1606220' 00:05:32.181 killing process with pid 1606220 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1606220 00:05:32.181 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1606220 00:05:32.440 00:05:32.440 real 0m1.062s 00:05:32.440 user 0m1.027s 00:05:32.440 sys 0m0.415s 00:05:32.440 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.440 06:01:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.440 ************************************ 00:05:32.440 END TEST dpdk_mem_utility 00:05:32.440 ************************************ 00:05:32.440 06:01:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.440 06:01:25 -- spdk/autotest.sh@181 -- # 
run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.440 06:01:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.440 06:01:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.440 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:32.699 ************************************ 00:05:32.699 START TEST event 00:05:32.699 ************************************ 00:05:32.699 06:01:25 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.699 * Looking for test storage... 00:05:32.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.699 06:01:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:32.699 06:01:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:32.699 06:01:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.699 06:01:25 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:32.699 06:01:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.699 06:01:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.699 ************************************ 00:05:32.699 START TEST event_perf 00:05:32.699 ************************************ 00:05:32.699 06:01:25 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.699 Running I/O for 1 seconds...[2024-07-23 06:01:25.870239] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:32.699 [2024-07-23 06:01:25.870302] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606407 ] 00:05:32.699 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.699 [2024-07-23 06:01:25.900892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:32.699 [2024-07-23 06:01:25.930722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.699 [2024-07-23 06:01:26.023257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.699 [2024-07-23 06:01:26.023327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.699 [2024-07-23 06:01:26.023421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.699 [2024-07-23 06:01:26.023424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.080 Running I/O for 1 seconds... 00:05:34.080 lcore 0: 231491 00:05:34.080 lcore 1: 231493 00:05:34.080 lcore 2: 231491 00:05:34.080 lcore 3: 231491 00:05:34.080 done. 
00:05:34.080 00:05:34.080 real 0m1.248s 00:05:34.080 user 0m4.156s 00:05:34.080 sys 0m0.088s 00:05:34.081 06:01:27 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.081 06:01:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.081 ************************************ 00:05:34.081 END TEST event_perf 00:05:34.081 ************************************ 00:05:34.081 06:01:27 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.081 06:01:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.081 06:01:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:34.081 06:01:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.081 06:01:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.081 ************************************ 00:05:34.081 START TEST event_reactor 00:05:34.081 ************************************ 00:05:34.081 06:01:27 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.081 [2024-07-23 06:01:27.165431] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:34.081 [2024-07-23 06:01:27.165494] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606565 ] 00:05:34.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.081 [2024-07-23 06:01:27.200911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
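event_perf above and the reactor tests that follow are small standalone SPDK applications built under test/event/; each takes a reactor core mask (-m) and a run time in seconds (-t) and prints its counters when the time expires. Reduced to their plain command lines, the three invocations in this part of the log are:

    # event round-trips across four reactors for one second (the per-lcore counts above)
    ./test/event/event_perf/event_perf -m 0xF -t 1

    # single-reactor timer test (produces the oneshot / tick 100/250/500 lines below)
    ./test/event/reactor/reactor -t 1

    # single-reactor event throughput, reported as events per second
    ./test/event/reactor_perf/reactor_perf -t 1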
00:05:34.081 [2024-07-23 06:01:27.232520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.081 [2024-07-23 06:01:27.326258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.464 test_start 00:05:35.464 oneshot 00:05:35.464 tick 100 00:05:35.464 tick 100 00:05:35.464 tick 250 00:05:35.464 tick 100 00:05:35.464 tick 100 00:05:35.464 tick 250 00:05:35.464 tick 100 00:05:35.464 tick 500 00:05:35.464 tick 100 00:05:35.464 tick 100 00:05:35.464 tick 250 00:05:35.464 tick 100 00:05:35.464 tick 100 00:05:35.464 test_end 00:05:35.464 00:05:35.464 real 0m1.255s 00:05:35.464 user 0m1.163s 00:05:35.464 sys 0m0.087s 00:05:35.464 06:01:28 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.464 06:01:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:35.464 ************************************ 00:05:35.464 END TEST event_reactor 00:05:35.464 ************************************ 00:05:35.464 06:01:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.464 06:01:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.464 06:01:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:35.464 06:01:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.464 06:01:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.464 ************************************ 00:05:35.464 START TEST event_reactor_perf 00:05:35.464 ************************************ 00:05:35.464 06:01:28 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.464 [2024-07-23 06:01:28.471977] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:35.464 [2024-07-23 06:01:28.472044] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606723 ] 00:05:35.464 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.464 [2024-07-23 06:01:28.505328] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:35.464 [2024-07-23 06:01:28.534634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.464 [2024-07-23 06:01:28.627720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.404 test_start 00:05:36.404 test_end 00:05:36.404 Performance: 355971 events per second 00:05:36.404 00:05:36.404 real 0m1.251s 00:05:36.404 user 0m1.163s 00:05:36.404 sys 0m0.082s 00:05:36.404 06:01:29 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.404 06:01:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.404 ************************************ 00:05:36.404 END TEST event_reactor_perf 00:05:36.405 ************************************ 00:05:36.405 06:01:29 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.405 06:01:29 event -- event/event.sh@49 -- # uname -s 00:05:36.405 06:01:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.405 06:01:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.405 06:01:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.405 06:01:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.405 06:01:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.662 ************************************ 00:05:36.662 START TEST event_scheduler 00:05:36.662 ************************************ 00:05:36.662 06:01:29 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.662 * Looking for test storage... 00:05:36.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:36.662 06:01:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.662 06:01:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1606960 00:05:36.662 06:01:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.662 06:01:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.662 06:01:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1606960 00:05:36.662 06:01:29 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1606960 ']' 00:05:36.662 06:01:29 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.662 06:01:29 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.662 06:01:29 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.662 06:01:29 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.662 06:01:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.662 [2024-07-23 06:01:29.856314] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:05:36.662 [2024-07-23 06:01:29.856421] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606960 ] 00:05:36.662 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.662 [2024-07-23 06:01:29.896329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:36.662 [2024-07-23 06:01:29.922348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.923 [2024-07-23 06:01:30.010317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.923 [2024-07-23 06:01:30.010358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.923 [2024-07-23 06:01:30.010447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.923 [2024-07-23 06:01:30.010450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:36.923 06:01:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 [2024-07-23 06:01:30.071314] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:36.923 [2024-07-23 06:01:30.071374] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:36.923 [2024-07-23 06:01:30.071408] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:36.923 [2024-07-23 06:01:30.071419] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:36.923 [2024-07-23 06:01:30.071429] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 [2024-07-23 06:01:30.163297] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
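The scheduler test has just switched the running app to the dynamic scheduler and completed framework init; the scheduler_create_thread subtest that follows creates pinned threads through an rpc.py plugin shipped with the test and then retires them. Stripped of the rpc_cmd wrapper, the calls in this part of the trace amount to the sketch below (the plugin module must be importable by rpc.py, which the test's rpc_cmd wrapper arranges; thread ids 11 and 12 are the ids returned by the create calls in this particular run):

    # app already running as: ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init

    # create a fully active thread pinned to core 0, set another thread to 50% busy, delete one
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12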
00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 ************************************ 00:05:36.923 START TEST scheduler_create_thread 00:05:36.923 ************************************ 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 2 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 3 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 4 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 5 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 6 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 7 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 8 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 9 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 10 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.923 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.190 06:01:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.124 06:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.124 00:05:38.125 real 0m1.169s 00:05:38.125 user 0m0.011s 00:05:38.125 sys 0m0.003s 00:05:38.125 06:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.125 06:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.125 ************************************ 00:05:38.125 END TEST scheduler_create_thread 00:05:38.125 ************************************ 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:38.125 06:01:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.125 06:01:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1606960 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1606960 ']' 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1606960 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606960 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1606960' 00:05:38.125 killing process with pid 1606960 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1606960 00:05:38.125 06:01:31 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1606960 00:05:38.689 [2024-07-23 06:01:31.836929] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
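With the scheduler run finished, the log moves on to app_repeat, which starts an SPDK app on its own RPC socket (/var/tmp/spdk-nbd.sock), backs it with malloc bdevs, exports them as kernel NBD devices and round-trips random data through them. The core of that loop, as it appears in the dd/cmp trace further down (using /tmp/nbdrandtest as a stand-in for the workspace scratch file, and assuming the nbd kernel module is already loaded, which the test's modprobe step takes care of):

    # create a 64 MiB malloc bdev with 4096-byte blocks behind the app_repeat RPC socket
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # returns Malloc0

    # expose the bdev as /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0

    # write a random pattern through the NBD device and check it reads back identically
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

    # detach the device again
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0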
00:05:38.948 00:05:38.948 real 0m2.298s 00:05:38.948 user 0m2.677s 00:05:38.948 sys 0m0.318s 00:05:38.948 06:01:32 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.948 06:01:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.948 ************************************ 00:05:38.948 END TEST event_scheduler 00:05:38.948 ************************************ 00:05:38.948 06:01:32 event -- common/autotest_common.sh@1142 -- # return 0 00:05:38.949 06:01:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.949 06:01:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.949 06:01:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.949 06:01:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.949 06:01:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.949 ************************************ 00:05:38.949 START TEST app_repeat 00:05:38.949 ************************************ 00:05:38.949 06:01:32 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1607292 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1607292' 00:05:38.949 Process app_repeat pid: 1607292 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.949 spdk_app_start Round 0 00:05:38.949 06:01:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1607292 /var/tmp/spdk-nbd.sock 00:05:38.949 06:01:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1607292 ']' 00:05:38.949 06:01:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.949 06:01:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.949 06:01:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.949 06:01:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.949 06:01:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.949 [2024-07-23 06:01:32.129690] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:05:38.949 [2024-07-23 06:01:32.129750] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607292 ] 00:05:38.949 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.949 [2024-07-23 06:01:32.161274] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:38.949 [2024-07-23 06:01:32.189043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.949 [2024-07-23 06:01:32.280082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.949 [2024-07-23 06:01:32.280086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.206 06:01:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.206 06:01:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:39.206 06:01:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.463 Malloc0 00:05:39.463 06:01:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.722 Malloc1 00:05:39.722 06:01:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.722 06:01:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.979 /dev/nbd0 00:05:39.979 06:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.979 06:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:39.979 06:01:33 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.979 1+0 records in 00:05:39.979 1+0 records out 00:05:39.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174405 s, 23.5 MB/s 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.979 06:01:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:39.979 06:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.979 06:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.979 06:01:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.236 /dev/nbd1 00:05:40.236 06:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.236 06:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.236 1+0 records in 00:05:40.236 1+0 records out 00:05:40.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195634 s, 20.9 MB/s 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.236 06:01:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.236 
06:01:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.236 06:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.236 06:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.236 06:01:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.236 06:01:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.236 06:01:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.493 { 00:05:40.493 "nbd_device": "/dev/nbd0", 00:05:40.493 "bdev_name": "Malloc0" 00:05:40.493 }, 00:05:40.493 { 00:05:40.493 "nbd_device": "/dev/nbd1", 00:05:40.493 "bdev_name": "Malloc1" 00:05:40.493 } 00:05:40.493 ]' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.493 { 00:05:40.493 "nbd_device": "/dev/nbd0", 00:05:40.493 "bdev_name": "Malloc0" 00:05:40.493 }, 00:05:40.493 { 00:05:40.493 "nbd_device": "/dev/nbd1", 00:05:40.493 "bdev_name": "Malloc1" 00:05:40.493 } 00:05:40.493 ]' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.493 /dev/nbd1' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.493 /dev/nbd1' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.493 256+0 records in 00:05:40.493 256+0 records out 00:05:40.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491076 s, 214 MB/s 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.493 256+0 records in 00:05:40.493 256+0 records out 00:05:40.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239059 s, 43.9 MB/s 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.493 256+0 records in 00:05:40.493 256+0 records out 00:05:40.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256185 s, 40.9 MB/s 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.493 06:01:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.494 06:01:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.752 06:01:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.318 06:01:34 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.318 06:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.575 06:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.575 06:01:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.575 06:01:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.575 06:01:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.575 06:01:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.575 06:01:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.833 06:01:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.833 [2024-07-23 06:01:35.147960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.092 [2024-07-23 06:01:35.239873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.092 [2024-07-23 06:01:35.239877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.092 [2024-07-23 06:01:35.301308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.092 [2024-07-23 06:01:35.301384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
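The round traced above always exercises the same data path: fill a scratch file with random data, write it through each exported NBD device with O_DIRECT, then read every device back and byte-compare it against the scratch file. Below is a minimal standalone sketch of that write/verify pass; the device names and the scratch-file location are assumptions for illustration, not fixed paths of the test harness.

    #!/usr/bin/env bash
    # Minimal sketch of the NBD write/verify pass seen in the trace above.
    # Assumes /dev/nbd0 and /dev/nbd1 are already connected (illustrative names).
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$(mktemp)            # stand-in for the nbdrandtest scratch file

    # 1 MiB of random reference data: 256 blocks of 4 KiB.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

    # Write the pattern to every device, bypassing the page cache.
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct status=none
    done

    # Read each device back and byte-compare its first 1 MiB with the reference.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done

    rm -f "$tmp_file"
    echo "nbd write/verify OK"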
00:05:44.616 06:01:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.616 06:01:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.616 spdk_app_start Round 1 00:05:44.616 06:01:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1607292 /var/tmp/spdk-nbd.sock 00:05:44.616 06:01:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1607292 ']' 00:05:44.616 06:01:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.616 06:01:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.616 06:01:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.616 06:01:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.616 06:01:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.874 06:01:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.874 06:01:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:44.874 06:01:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.131 Malloc0 00:05:45.131 06:01:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.390 Malloc1 00:05:45.390 06:01:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.390 06:01:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.648 /dev/nbd0 00:05:45.648 06:01:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.648 06:01:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.648 1+0 records in 00:05:45.648 1+0 records out 00:05:45.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159626 s, 25.7 MB/s 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.648 06:01:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.648 06:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.648 06:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.648 06:01:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.906 /dev/nbd1 00:05:45.906 06:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.906 06:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.906 06:01:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:45.906 06:01:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.906 06:01:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.906 06:01:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.906 06:01:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.907 1+0 records in 00:05:45.907 1+0 records out 00:05:45.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020155 s, 20.3 MB/s 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.907 06:01:39 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.907 06:01:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.907 06:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.907 06:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.907 06:01:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.907 06:01:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.907 06:01:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.164 06:01:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.164 { 00:05:46.164 "nbd_device": "/dev/nbd0", 00:05:46.164 "bdev_name": "Malloc0" 00:05:46.164 }, 00:05:46.164 { 00:05:46.164 "nbd_device": "/dev/nbd1", 00:05:46.164 "bdev_name": "Malloc1" 00:05:46.164 } 00:05:46.164 ]' 00:05:46.164 06:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.164 { 00:05:46.164 "nbd_device": "/dev/nbd0", 00:05:46.164 "bdev_name": "Malloc0" 00:05:46.164 }, 00:05:46.164 { 00:05:46.164 "nbd_device": "/dev/nbd1", 00:05:46.164 "bdev_name": "Malloc1" 00:05:46.164 } 00:05:46.164 ]' 00:05:46.164 06:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.422 /dev/nbd1' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.422 /dev/nbd1' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.422 256+0 records in 00:05:46.422 256+0 records out 00:05:46.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465446 s, 225 MB/s 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.422 256+0 records in 00:05:46.422 256+0 records out 00:05:46.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0243407 s, 43.1 MB/s 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.422 256+0 records in 00:05:46.422 256+0 records out 00:05:46.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257746 s, 40.7 MB/s 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.422 06:01:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.681 06:01:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.954 06:01:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.954 06:01:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.954 06:01:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.954 06:01:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.954 06:01:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.955 06:01:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.955 06:01:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.955 06:01:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.955 06:01:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.955 06:01:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.955 06:01:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.213 06:01:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.213 06:01:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.471 06:01:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.729 [2024-07-23 06:01:40.926521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.729 [2024-07-23 06:01:41.017040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.729 [2024-07-23 06:01:41.017045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.987 [2024-07-23 06:01:41.076694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.987 [2024-07-23 06:01:41.076747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
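Before any data is written, the waitfornbd step traced above confirms that the kernel has actually exposed each NBD device: it polls /proc/partitions up to 20 times and then issues a single 4 KiB direct-I/O read as a probe. A hedged sketch of that readiness check follows; the 20-try limit and the dd probe mirror the trace, while the retry sleep and the helper name are assumptions.

    # Sketch of a waitfornbd-style readiness probe (illustrative helper name).
    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Ready once the kernel lists the device in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1               # retry interval is an assumption
        done
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            echo "$nbd_name never appeared in /proc/partitions" >&2
            return 1
        fi
        # Probe with one 4 KiB direct read to make sure I/O actually works.
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct status=none
    }

    wait_for_nbd nbd0
    wait_for_nbd nbd1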
00:05:50.517 06:01:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.517 06:01:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.517 spdk_app_start Round 2 00:05:50.517 06:01:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1607292 /var/tmp/spdk-nbd.sock 00:05:50.517 06:01:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1607292 ']' 00:05:50.517 06:01:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.517 06:01:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.517 06:01:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.517 06:01:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.517 06:01:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.774 06:01:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.774 06:01:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:50.774 06:01:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.032 Malloc0 00:05:51.032 06:01:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.290 Malloc1 00:05:51.290 06:01:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.290 06:01:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.548 /dev/nbd0 00:05:51.548 06:01:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.548 06:01:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.548 1+0 records in 00:05:51.548 1+0 records out 00:05:51.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154736 s, 26.5 MB/s 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:51.548 06:01:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:51.548 06:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.548 06:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.548 06:01:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.806 /dev/nbd1 00:05:51.806 06:01:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.806 06:01:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.806 1+0 records in 00:05:51.806 1+0 records out 00:05:51.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201926 s, 20.3 MB/s 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.806 06:01:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:51.806 06:01:44 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.806 06:01:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:51.806 06:01:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:51.806 06:01:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.806 06:01:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.806 06:01:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.806 06:01:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.806 06:01:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.064 { 00:05:52.064 "nbd_device": "/dev/nbd0", 00:05:52.064 "bdev_name": "Malloc0" 00:05:52.064 }, 00:05:52.064 { 00:05:52.064 "nbd_device": "/dev/nbd1", 00:05:52.064 "bdev_name": "Malloc1" 00:05:52.064 } 00:05:52.064 ]' 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.064 { 00:05:52.064 "nbd_device": "/dev/nbd0", 00:05:52.064 "bdev_name": "Malloc0" 00:05:52.064 }, 00:05:52.064 { 00:05:52.064 "nbd_device": "/dev/nbd1", 00:05:52.064 "bdev_name": "Malloc1" 00:05:52.064 } 00:05:52.064 ]' 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.064 /dev/nbd1' 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.064 /dev/nbd1' 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.064 256+0 records in 00:05:52.064 256+0 records out 00:05:52.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413875 s, 253 MB/s 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.064 256+0 records in 00:05:52.064 256+0 records out 00:05:52.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0242825 s, 43.2 MB/s 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.064 256+0 records in 00:05:52.064 256+0 records out 00:05:52.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258475 s, 40.6 MB/s 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.064 06:01:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.065 06:01:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.322 06:01:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.580 06:01:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.838 06:01:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.838 06:01:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.838 06:01:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.838 06:01:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.095 06:01:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.095 06:01:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.353 06:01:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.353 [2024-07-23 06:01:46.671991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.611 [2024-07-23 06:01:46.763597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.611 [2024-07-23 06:01:46.763602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.611 [2024-07-23 06:01:46.822525] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.611 [2024-07-23 06:01:46.822621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
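Each application round above repeats the same sequence: wait for the spdk-nbd socket, create two malloc bdevs over RPC, run the NBD start/verify/stop cycle, then ask the app to terminate with spdk_kill_instance and give it a few seconds before the next round begins. A rough sketch of that per-round loop is shown here; the rpc.py path and socket are the ones from this run, and the loop body is illustrative rather than the literal event.sh script.

    # Illustrative per-round driver, modelled on the app_repeat trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        "$rpc" -s "$sock" bdev_malloc_create 64 4096    # Malloc0
        "$rpc" -s "$sock" bdev_malloc_create 64 4096    # Malloc1
        # ... nbd_start_disk, write/verify, nbd_stop_disk as sketched earlier ...
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM
        sleep 3                                         # let the app restart
    done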
00:05:56.137 06:01:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1607292 /var/tmp/spdk-nbd.sock 00:05:56.137 06:01:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1607292 ']' 00:05:56.137 06:01:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.137 06:01:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.137 06:01:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.137 06:01:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.137 06:01:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:56.395 06:01:49 event.app_repeat -- event/event.sh@39 -- # killprocess 1607292 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1607292 ']' 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1607292 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1607292 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1607292' 00:05:56.395 killing process with pid 1607292 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1607292 00:05:56.395 06:01:49 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1607292 00:05:56.653 spdk_app_start is called in Round 0. 00:05:56.653 Shutdown signal received, stop current app iteration 00:05:56.653 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 reinitialization... 00:05:56.653 spdk_app_start is called in Round 1. 00:05:56.653 Shutdown signal received, stop current app iteration 00:05:56.653 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 reinitialization... 00:05:56.653 spdk_app_start is called in Round 2. 00:05:56.653 Shutdown signal received, stop current app iteration 00:05:56.653 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 reinitialization... 00:05:56.653 spdk_app_start is called in Round 3. 
00:05:56.653 Shutdown signal received, stop current app iteration 00:05:56.653 06:01:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.653 06:01:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.653 00:05:56.653 real 0m17.820s 00:05:56.653 user 0m38.849s 00:05:56.653 sys 0m3.172s 00:05:56.653 06:01:49 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.653 06:01:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.653 ************************************ 00:05:56.653 END TEST app_repeat 00:05:56.653 ************************************ 00:05:56.653 06:01:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:56.653 06:01:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.653 06:01:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:56.653 06:01:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.653 06:01:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.653 06:01:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.653 ************************************ 00:05:56.653 START TEST cpu_locks 00:05:56.653 ************************************ 00:05:56.653 06:01:49 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:56.911 * Looking for test storage... 00:05:56.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:56.911 06:01:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.911 06:01:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.911 06:01:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.911 06:01:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.911 06:01:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.911 06:01:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.911 06:01:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.911 ************************************ 00:05:56.911 START TEST default_locks 00:05:56.911 ************************************ 00:05:56.911 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:56.911 06:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1609570 00:05:56.911 06:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.911 06:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1609570 00:05:56.911 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1609570 ']' 00:05:56.911 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.911 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.912 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
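The default_locks test that begins here starts a plain spdk_tgt on core mask 0x1 and then verifies that the running target holds its per-core CPU lock, by inspecting the process with lslocks. A minimal sketch of that check, using the spdk_tgt build path from this run; the crude sleep stands in for the real socket-polling wait.

    # Sketch of the CPU-lock check performed by default_locks (illustrative wrapper).
    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &
    tgt_pid=$!
    sleep 2               # crude wait; the real test polls the RPC socket instead

    # The target should hold a POSIX lock on its per-core lock file.
    if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by pid $tgt_pid"
    else
        echo "expected spdk_cpu_lock to be held" >&2
    fi

    kill -9 "$tgt_pid"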
00:05:56.912 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.912 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.912 [2024-07-23 06:01:50.108624] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:56.912 [2024-07-23 06:01:50.108720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609570 ] 00:05:56.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.912 [2024-07-23 06:01:50.144475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:56.912 [2024-07-23 06:01:50.173491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.170 [2024-07-23 06:01:50.265519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.170 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.170 06:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:57.170 06:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1609570 00:05:57.170 06:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1609570 00:05:57.170 06:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.736 lslocks: write error 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1609570 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1609570 ']' 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1609570 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1609570 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1609570' 00:05:57.736 killing process with pid 1609570 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1609570 00:05:57.736 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1609570 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1609570 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1609570 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t 
waitforlisten 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1609570 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1609570 ']' 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1609570) - No such process 00:05:58.302 ERROR: process (pid: 1609570) is no longer running 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.302 00:05:58.302 real 0m1.380s 00:05:58.302 user 0m1.331s 00:05:58.302 sys 0m0.572s 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.302 06:01:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 ************************************ 00:05:58.302 END TEST default_locks 00:05:58.302 ************************************ 00:05:58.302 06:01:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:58.302 06:01:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:58.302 06:01:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.302 06:01:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.302 06:01:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 ************************************ 00:05:58.302 START TEST default_locks_via_rpc 00:05:58.302 ************************************ 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1609853 
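The default_locks run above ends on a negative path: once the target is killed, waiting on the dead PID must fail, and no CPU lock files may be left behind. A small sketch of that post-kill assertion follows; the lock-file glob under /var/tmp is an assumption for the example, and the function name is illustrative.

    # Post-kill assertions, sketched; pass the spdk_tgt PID that was just killed,
    # e.g. check_clean_shutdown 1609570.
    check_clean_shutdown() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            echo "pid $pid is unexpectedly still alive" >&2
            return 1
        fi
        local leftover=( /var/tmp/spdk_cpu_lock* )    # glob is an assumption
        if [ -e "${leftover[0]}" ]; then
            echo "stale CPU lock files remain: ${leftover[*]}" >&2
            return 1
        fi
        echo "no stale locks after shutdown"
    }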
00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1609853 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1609853 ']' 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.302 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 [2024-07-23 06:01:51.528554] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:58.302 [2024-07-23 06:01:51.528661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609853 ] 00:05:58.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.302 [2024-07-23 06:01:51.561457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
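The default_locks_via_rpc case starting here toggles the same core locks at runtime over JSON-RPC instead of command-line flags (the framework_disable_cpumask_locks / framework_enable_cpumask_locks calls appear in the trace just below). A hedged sketch, assuming SPDK's scripts/rpc.py client is on PATH and the target listens on the default /var/tmp/spdk.sock:

    # Sketch only: release and re-claim the CPU core locks on a running target.
    SOCK=/var/tmp/spdk.sock

    rpc.py -s "$SOCK" framework_disable_cpumask_locks   # drop the core lock files
    rpc.py -s "$SOCK" framework_enable_cpumask_locks    # claim them again

    pid=$(pgrep -fo spdk_tgt)                           # oldest spdk_tgt instance
    lslocks -p "$pid" | grep spdk_cpu_lock              # lock should be held again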
00:05:58.302 [2024-07-23 06:01:51.587151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.561 [2024-07-23 06:01:51.676189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.818 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.818 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.818 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.818 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1609853 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1609853 00:05:58.819 06:01:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1609853 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1609853 ']' 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1609853 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1609853 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1609853' 00:05:59.076 killing process with pid 1609853 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1609853 00:05:59.076 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1609853 00:05:59.335 00:05:59.335 real 0m1.138s 00:05:59.335 user 0m1.073s 00:05:59.335 sys 0m0.529s 00:05:59.335 06:01:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.335 06:01:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.335 ************************************ 00:05:59.335 END TEST default_locks_via_rpc 00:05:59.335 ************************************ 00:05:59.335 06:01:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.335 06:01:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:59.335 06:01:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.335 06:01:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.335 06:01:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.335 ************************************ 00:05:59.335 START TEST non_locking_app_on_locked_coremask 00:05:59.335 ************************************ 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1610014 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1610014 /var/tmp/spdk.sock 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1610014 ']' 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.335 06:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.595 [2024-07-23 06:01:52.723225] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:59.595 [2024-07-23 06:01:52.723314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610014 ] 00:05:59.595 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.595 [2024-07-23 06:01:52.754349] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:59.595 [2024-07-23 06:01:52.785595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.595 [2024-07-23 06:01:52.874704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1610023 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1610023 /var/tmp/spdk2.sock 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1610023 ']' 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.853 06:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.853 [2024-07-23 06:01:53.183626] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:59.853 [2024-07-23 06:01:53.183709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610023 ] 00:06:00.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.131 [2024-07-23 06:01:53.217380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:00.131 [2024-07-23 06:01:53.281679] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
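The "CPU core locks deactivated" notice above is what lets a second target share core 0 in the non_locking_app_on_locked_coremask case: it was started with --disable-cpumask-locks and its own RPC socket. A sketch of that two-instance pattern (the spdk_tgt path is shortened; the full build-tree path appears in the trace):

    # Sketch: two targets on core 0; only the first claims the core lock.
    SPDK_TGT=./build/bin/spdk_tgt

    $SPDK_TGT -m 0x1 & pid1=$!
    sleep 2   # crude wait; the real test uses waitforlisten on /var/tmp/spdk.sock
    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    sleep 2

    lslocks -p "$pid1" | grep spdk_cpu_lock || echo "no lock held by $pid1"
    lslocks -p "$pid2" | grep spdk_cpu_lock || echo "no lock held by $pid2"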
00:06:00.131 [2024-07-23 06:01:53.281710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.131 [2024-07-23 06:01:53.465107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.070 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.070 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.070 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1610014 00:06:01.070 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1610014 00:06:01.070 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.636 lslocks: write error 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1610014 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1610014 ']' 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1610014 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1610014 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.636 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.637 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1610014' 00:06:01.637 killing process with pid 1610014 00:06:01.637 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1610014 00:06:01.637 06:01:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1610014 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1610023 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1610023 ']' 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1610023 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1610023 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1610023' 00:06:02.585 
killing process with pid 1610023 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1610023 00:06:02.585 06:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1610023 00:06:02.843 00:06:02.843 real 0m3.353s 00:06:02.843 user 0m3.473s 00:06:02.843 sys 0m1.094s 00:06:02.843 06:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.843 06:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.843 ************************************ 00:06:02.843 END TEST non_locking_app_on_locked_coremask 00:06:02.843 ************************************ 00:06:02.843 06:01:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:02.843 06:01:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:02.843 06:01:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.843 06:01:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.843 06:01:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.843 ************************************ 00:06:02.843 START TEST locking_app_on_unlocked_coremask 00:06:02.843 ************************************ 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1610455 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1610455 /var/tmp/spdk.sock 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1610455 ']' 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.843 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.843 [2024-07-23 06:01:56.127798] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:06:02.843 [2024-07-23 06:01:56.127882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610455 ] 00:06:02.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.843 [2024-07-23 06:01:56.159002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.101 [2024-07-23 06:01:56.188751] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.101 [2024-07-23 06:01:56.188781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.101 [2024-07-23 06:01:56.276287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1610458 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1610458 /var/tmp/spdk2.sock 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1610458 ']' 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.360 06:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.360 [2024-07-23 06:01:56.590395] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:03.360 [2024-07-23 06:01:56.590466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610458 ] 00:06:03.360 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.360 [2024-07-23 06:01:56.624451] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
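In locking_app_on_unlocked_coremask the roles are reversed: the first target runs with --disable-cpumask-locks and the second keeps locking enabled, so the lock file ends up owned by the second pid (1610458 in this run). Start order does not matter; whichever instance keeps cpumask locks on claims the per-core files. A small sketch for finding the owner of the core-0 lock (column names are an assumption about the local util-linux lslocks):

    # Sketch: report which pid holds the core-0 lock file, whoever started first.
    lslocks -o PID,PATH --noheadings \
        | awk '$2 ~ /spdk_cpu_lock_000/ {print "core 0 locked by pid", $1}'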
00:06:03.360 [2024-07-23 06:01:56.675411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.618 [2024-07-23 06:01:56.856685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.552 06:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.552 06:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:04.553 06:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1610458 00:06:04.553 06:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1610458 00:06:04.553 06:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.810 lslocks: write error 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1610455 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1610455 ']' 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1610455 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1610455 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1610455' 00:06:04.810 killing process with pid 1610455 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1610455 00:06:04.810 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1610455 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1610458 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1610458 ']' 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1610458 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1610458 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1610458' 00:06:05.743 killing process with pid 1610458 00:06:05.743 
06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1610458 00:06:05.743 06:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1610458 00:06:06.001 00:06:06.001 real 0m3.263s 00:06:06.001 user 0m3.396s 00:06:06.001 sys 0m1.093s 00:06:06.001 06:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.001 06:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.001 ************************************ 00:06:06.001 END TEST locking_app_on_unlocked_coremask 00:06:06.001 ************************************ 00:06:06.259 06:01:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:06.259 06:01:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:06.259 06:01:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.259 06:01:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.259 06:01:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 ************************************ 00:06:06.259 START TEST locking_app_on_locked_coremask 00:06:06.259 ************************************ 00:06:06.259 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:06.259 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1610889 00:06:06.259 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.259 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1610889 /var/tmp/spdk.sock 00:06:06.259 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1610889 ']' 00:06:06.260 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.260 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.260 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.260 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.260 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.260 [2024-07-23 06:01:59.442266] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:06.260 [2024-07-23 06:01:59.442353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610889 ] 00:06:06.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.260 [2024-07-23 06:01:59.473014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:06.260 [2024-07-23 06:01:59.504567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.260 [2024-07-23 06:01:59.593392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1610900 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1610900 /var/tmp/spdk2.sock 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1610900 /var/tmp/spdk2.sock 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1610900 /var/tmp/spdk2.sock 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1610900 ']' 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.518 06:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.776 [2024-07-23 06:01:59.903877] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:06.776 [2024-07-23 06:01:59.903959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610900 ] 00:06:06.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.776 [2024-07-23 06:01:59.937377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
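The failure that follows immediately below is the expected outcome of locking_app_on_locked_coremask: with locking left enabled on both sides, the second spdk_tgt cannot claim core 0 and exits ("Cannot create lock on core 0, probably process 1610889 has claimed it"). The trace wraps this in the NOT helper; a plain-shell sketch of the same expectation, with timeout added only as a safety net:

    # Sketch: the second instance is expected to fail while pid 1610889 holds core 0.
    if ! timeout 10 ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second target exited as expected (core 0 already claimed)"
    fi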
00:06:06.776 [2024-07-23 06:02:00.000709] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1610889 has claimed it. 00:06:06.776 [2024-07-23 06:02:00.000766] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1610900) - No such process 00:06:07.341 ERROR: process (pid: 1610900) is no longer running 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1610889 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1610889 00:06:07.341 06:02:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.907 lslocks: write error 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1610889 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1610889 ']' 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1610889 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1610889 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1610889' 00:06:07.907 killing process with pid 1610889 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1610889 00:06:07.907 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1610889 00:06:08.164 00:06:08.164 real 0m2.097s 00:06:08.164 user 0m2.241s 00:06:08.164 sys 0m0.678s 00:06:08.164 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.164 06:02:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.164 ************************************ 00:06:08.164 END TEST locking_app_on_locked_coremask 00:06:08.164 ************************************ 00:06:08.421 06:02:01 
event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.421 06:02:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.421 06:02:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.421 06:02:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.421 06:02:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.421 ************************************ 00:06:08.421 START TEST locking_overlapped_coremask 00:06:08.421 ************************************ 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1611185 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1611185 /var/tmp/spdk.sock 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1611185 ']' 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.421 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.421 [2024-07-23 06:02:01.586239] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:08.421 [2024-07-23 06:02:01.586331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611185 ] 00:06:08.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.421 [2024-07-23 06:02:01.618708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:08.421 [2024-07-23 06:02:01.644433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.421 [2024-07-23 06:02:01.733041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.421 [2024-07-23 06:02:01.733105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.421 [2024-07-23 06:02:01.733108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1611198 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1611198 /var/tmp/spdk2.sock 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1611198 /var/tmp/spdk2.sock 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1611198 /var/tmp/spdk2.sock 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1611198 ']' 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.679 06:02:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.937 [2024-07-23 06:02:02.045123] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
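The two masks used in locking_overlapped_coremask collide on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is contested and the second target aborts (the claim error appears just below). A small arithmetic sketch of that overlap:

    # Sketch: show which cores two hex masks have in common.
    m1=0x7    # cores 0,1,2 (first spdk_tgt)
    m2=0x1c   # cores 2,3,4 (second spdk_tgt)
    overlap=$(( m1 & m2 ))
    printf 'overlapping mask: 0x%x\n' "$overlap"   # 0x4 -> core 2
    for core in $(seq 0 63); do
        (( (overlap >> core) & 1 )) && echo "contested core: $core"
    done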
00:06:08.937 [2024-07-23 06:02:02.045205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611198 ] 00:06:08.937 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.937 [2024-07-23 06:02:02.080654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:08.937 [2024-07-23 06:02:02.136725] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1611185 has claimed it. 00:06:08.937 [2024-07-23 06:02:02.136781] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1611198) - No such process 00:06:09.503 ERROR: process (pid: 1611198) is no longer running 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1611185 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1611185 ']' 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1611185 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611185 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1611185' 00:06:09.503 killing process with pid 1611185 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1611185 00:06:09.503 06:02:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1611185 00:06:10.073 00:06:10.073 real 0m1.621s 00:06:10.073 user 0m4.364s 00:06:10.073 sys 0m0.467s 00:06:10.073 06:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.073 06:02:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.073 ************************************ 00:06:10.073 END TEST locking_overlapped_coremask 00:06:10.073 ************************************ 00:06:10.073 06:02:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.073 06:02:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.073 06:02:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.073 06:02:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.073 06:02:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.073 ************************************ 00:06:10.073 START TEST locking_overlapped_coremask_via_rpc 00:06:10.073 ************************************ 00:06:10.073 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:10.073 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1611362 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1611362 /var/tmp/spdk.sock 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1611362 ']' 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.074 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.074 [2024-07-23 06:02:03.261639] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
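The check_remaining_locks step in the preceding test compares the lock files actually present against the set expected for mask 0x7; the file naming (zero-padded three-digit core numbers under /var/tmp) is taken directly from the trace. A sketch of the same comparison:

    # Sketch: verify that exactly the core 0..2 lock files remain after the run.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "only the expected lock files remain"
    else
        echo "unexpected lock files: ${locks[*]}"
    fi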
00:06:10.074 [2024-07-23 06:02:03.261732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611362 ] 00:06:10.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.074 [2024-07-23 06:02:03.292699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.074 [2024-07-23 06:02:03.323949] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.074 [2024-07-23 06:02:03.323978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.074 [2024-07-23 06:02:03.413904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.074 [2024-07-23 06:02:03.413973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.074 [2024-07-23 06:02:03.413976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1611371 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1611371 /var/tmp/spdk2.sock 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1611371 ']' 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.336 06:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.594 [2024-07-23 06:02:03.722149] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:10.594 [2024-07-23 06:02:03.722240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611371 ] 00:06:10.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.594 [2024-07-23 06:02:03.761703] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:10.594 [2024-07-23 06:02:03.818139] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.594 [2024-07-23 06:02:03.818166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.852 [2024-07-23 06:02:04.000749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.852 [2024-07-23 06:02:04.000809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.852 [2024-07-23 06:02:04.000811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.417 [2024-07-23 06:02:04.675706] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1611362 has claimed it. 
00:06:11.417 request: 00:06:11.417 { 00:06:11.417 "method": "framework_enable_cpumask_locks", 00:06:11.417 "req_id": 1 00:06:11.417 } 00:06:11.417 Got JSON-RPC error response 00:06:11.417 response: 00:06:11.417 { 00:06:11.417 "code": -32603, 00:06:11.417 "message": "Failed to claim CPU core: 2" 00:06:11.417 } 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1611362 /var/tmp/spdk.sock 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1611362 ']' 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.417 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1611371 /var/tmp/spdk2.sock 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1611371 ']' 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
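The JSON-RPC error above (code -32603, "Failed to claim CPU core: 2") is the runtime counterpart of the earlier startup failure: once the 0x7 target has enabled its locks, the 0x1c target cannot enable its own. A hedged sketch of issuing the same call against the second target's socket, again assuming scripts/rpc.py:

    # Sketch: the second target (mask 0x1c, socket /var/tmp/spdk2.sock) is expected
    # to be refused because core 2 is already locked by the 0x7 target.
    if ! rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "lock claim refused, as expected (core 2 already held)"
    fi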
00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.675 06:02:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.933 00:06:11.933 real 0m1.972s 00:06:11.933 user 0m0.999s 00:06:11.933 sys 0m0.195s 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.933 06:02:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.933 ************************************ 00:06:11.933 END TEST locking_overlapped_coremask_via_rpc 00:06:11.933 ************************************ 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.933 06:02:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.933 06:02:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1611362 ]] 00:06:11.933 06:02:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1611362 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1611362 ']' 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1611362 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611362 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611362' 00:06:11.933 killing process with pid 1611362 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1611362 00:06:11.933 06:02:05 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1611362 00:06:12.507 06:02:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1611371 ]] 00:06:12.507 06:02:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1611371 00:06:12.507 06:02:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1611371 ']' 00:06:12.507 06:02:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1611371 00:06:12.507 06:02:05 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:12.508 06:02:05 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.508 06:02:05 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611371 00:06:12.508 06:02:05 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:12.508 06:02:05 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:12.508 06:02:05 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611371' 00:06:12.508 killing process with pid 1611371 00:06:12.508 06:02:05 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1611371 00:06:12.508 06:02:05 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1611371 00:06:12.773 06:02:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.773 06:02:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.773 06:02:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1611362 ]] 00:06:12.773 06:02:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1611362 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1611362 ']' 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1611362 00:06:12.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1611362) - No such process 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1611362 is not found' 00:06:12.773 Process with pid 1611362 is not found 00:06:12.773 06:02:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1611371 ]] 00:06:12.773 06:02:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1611371 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1611371 ']' 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1611371 00:06:12.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1611371) - No such process 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1611371 is not found' 00:06:12.773 Process with pid 1611371 is not found 00:06:12.773 06:02:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.773 00:06:12.773 real 0m16.102s 00:06:12.773 user 0m27.699s 00:06:12.773 sys 0m5.525s 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.773 06:02:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.773 ************************************ 00:06:12.773 END TEST cpu_locks 00:06:12.773 ************************************ 00:06:12.773 06:02:06 event -- common/autotest_common.sh@1142 -- # return 0 00:06:12.773 00:06:12.773 real 0m40.313s 00:06:12.773 user 1m15.836s 00:06:12.773 sys 0m9.504s 00:06:12.773 06:02:06 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.773 06:02:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.773 ************************************ 00:06:12.773 END TEST event 00:06:12.773 ************************************ 00:06:13.031 06:02:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.031 06:02:06 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.031 06:02:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.031 06:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.031 
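Note: killprocess and the cleanup path above rely on the kill -0 probe; once the targets are gone, kill reports 'No such process' and the helper prints 'Process with pid ... is not found'. A self-contained sketch of the same liveness check:

  pid=1611362                               # example pid taken from this run
  # kill -0 delivers no signal; it only reports whether the pid exists and can be signalled
  if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is still running"
  else
    echo "Process with pid $pid is not found"
  fi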
06:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:13.031 ************************************ 00:06:13.031 START TEST thread 00:06:13.031 ************************************ 00:06:13.031 06:02:06 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.031 * Looking for test storage... 00:06:13.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:13.031 06:02:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.031 06:02:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:13.031 06:02:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.031 06:02:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.031 ************************************ 00:06:13.031 START TEST thread_poller_perf 00:06:13.031 ************************************ 00:06:13.031 06:02:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.031 [2024-07-23 06:02:06.223742] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:13.031 [2024-07-23 06:02:06.223809] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611861 ] 00:06:13.031 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.031 [2024-07-23 06:02:06.256152] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.031 [2024-07-23 06:02:06.282941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.031 [2024-07-23 06:02:06.371052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.031 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:14.402 ====================================== 00:06:14.402 busy:2712706838 (cyc) 00:06:14.402 total_run_count: 293000 00:06:14.402 tsc_hz: 2700000000 (cyc) 00:06:14.402 ====================================== 00:06:14.402 poller_cost: 9258 (cyc), 3428 (nsec) 00:06:14.402 00:06:14.402 real 0m1.253s 00:06:14.402 user 0m1.169s 00:06:14.402 sys 0m0.080s 00:06:14.402 06:02:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.402 06:02:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.402 ************************************ 00:06:14.402 END TEST thread_poller_perf 00:06:14.402 ************************************ 00:06:14.402 06:02:07 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:14.402 06:02:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.402 06:02:07 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:14.402 06:02:07 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.402 06:02:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.402 ************************************ 00:06:14.402 START TEST thread_poller_perf 00:06:14.402 ************************************ 00:06:14.402 06:02:07 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.402 [2024-07-23 06:02:07.527009] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:14.402 [2024-07-23 06:02:07.527074] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612014 ] 00:06:14.402 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.402 [2024-07-23 06:02:07.559074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.402 [2024-07-23 06:02:07.590915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.403 [2024-07-23 06:02:07.680635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.403 Running 1000 pollers for 1 seconds with 0 microseconds period. 
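Note: the summary block above is simple arithmetic over the printed counters: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure follows from tsc_hz. A worked check of the 1-microsecond-period run using the numbers shown:

  $ echo '2712706838 / 293000' | bc                    # 9258 cycles per poller iteration
  $ echo '9258 * 1000000000 / 2700000000' | bc         # 3428 ns at tsc_hz = 2.7 GHz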
00:06:15.781 ====================================== 00:06:15.781 busy:2702770188 (cyc) 00:06:15.781 total_run_count: 3694000 00:06:15.781 tsc_hz: 2700000000 (cyc) 00:06:15.781 ====================================== 00:06:15.781 poller_cost: 731 (cyc), 270 (nsec) 00:06:15.781 00:06:15.781 real 0m1.252s 00:06:15.781 user 0m1.166s 00:06:15.781 sys 0m0.079s 00:06:15.781 06:02:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.781 06:02:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.781 ************************************ 00:06:15.781 END TEST thread_poller_perf 00:06:15.781 ************************************ 00:06:15.781 06:02:08 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:15.781 06:02:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:15.781 00:06:15.781 real 0m2.646s 00:06:15.781 user 0m2.390s 00:06:15.781 sys 0m0.256s 00:06:15.781 06:02:08 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.781 06:02:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.781 ************************************ 00:06:15.781 END TEST thread 00:06:15.781 ************************************ 00:06:15.781 06:02:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.781 06:02:08 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:15.782 06:02:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.782 06:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.782 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:15.782 ************************************ 00:06:15.782 START TEST accel 00:06:15.782 ************************************ 00:06:15.782 06:02:08 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:15.782 * Looking for test storage... 00:06:15.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:15.782 06:02:08 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:15.782 06:02:08 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:15.782 06:02:08 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.782 06:02:08 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1612210 00:06:15.782 06:02:08 accel -- accel/accel.sh@63 -- # waitforlisten 1612210 00:06:15.782 06:02:08 accel -- common/autotest_common.sh@829 -- # '[' -z 1612210 ']' 00:06:15.782 06:02:08 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.782 06:02:08 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:15.782 06:02:08 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.782 06:02:08 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:15.782 06:02:08 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
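Note: waitforlisten above blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock (max_retries=100 in the helper). A simplified stand-in for that wait loop, assuming scripts/rpc.py and the rpc_get_methods RPC are available; this is an illustrative sketch, not the actual helper from autotest_common.sh:

  for ((i = 0; i < 100; i++)); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done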
00:06:15.782 06:02:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.782 06:02:08 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.782 06:02:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.782 06:02:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.782 06:02:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.782 06:02:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.782 06:02:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.782 06:02:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:15.782 06:02:08 accel -- accel/accel.sh@41 -- # jq -r . 00:06:15.782 [2024-07-23 06:02:08.941232] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:15.782 [2024-07-23 06:02:08.941309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612210 ] 00:06:15.782 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.782 [2024-07-23 06:02:08.972809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.782 [2024-07-23 06:02:09.002693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.782 [2024-07-23 06:02:09.093547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.039 06:02:09 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.039 06:02:09 accel -- common/autotest_common.sh@862 -- # return 0 00:06:16.039 06:02:09 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:16.039 06:02:09 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:16.039 06:02:09 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:16.039 06:02:09 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:16.039 06:02:09 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:16.039 06:02:09 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:16.039 06:02:09 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.039 06:02:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.039 06:02:09 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:16.039 06:02:09 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.297 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.297 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.297 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.298 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.298 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.298 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.298 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.298 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.298 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.298 06:02:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.298 06:02:09 accel -- accel/accel.sh@72 -- # IFS== 00:06:16.298 06:02:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:16.298 06:02:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.298 06:02:09 accel -- accel/accel.sh@75 -- # killprocess 1612210 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@948 -- # '[' -z 1612210 ']' 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@952 -- # kill -0 1612210 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@953 -- # uname 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1612210 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1612210' 00:06:16.298 killing process with pid 1612210 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@967 -- # kill 1612210 00:06:16.298 06:02:09 accel -- common/autotest_common.sh@972 -- # wait 1612210 00:06:16.557 06:02:09 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:16.557 06:02:09 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:16.557 06:02:09 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:16.557 06:02:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.557 06:02:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.557 06:02:09 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:16.557 06:02:09 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:16.557 06:02:09 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.557 06:02:09 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:16.557 06:02:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.557 06:02:09 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:16.557 06:02:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.557 06:02:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.557 06:02:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.816 ************************************ 00:06:16.816 START TEST accel_missing_filename 00:06:16.816 ************************************ 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.816 06:02:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:16.816 06:02:09 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:16.816 [2024-07-23 06:02:09.933410] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:16.816 [2024-07-23 06:02:09.933476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612382 ] 00:06:16.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.816 [2024-07-23 06:02:09.965382] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:16.816 [2024-07-23 06:02:09.996994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.816 [2024-07-23 06:02:10.095547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.816 [2024-07-23 06:02:10.157271] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.074 [2024-07-23 06:02:10.245807] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:17.074 A filename is required. 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.074 00:06:17.074 real 0m0.416s 00:06:17.074 user 0m0.298s 00:06:17.074 sys 0m0.152s 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.074 06:02:10 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:17.075 ************************************ 00:06:17.075 END TEST accel_missing_filename 00:06:17.075 ************************************ 00:06:17.075 06:02:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.075 06:02:10 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.075 06:02:10 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:17.075 06:02:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.075 06:02:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.075 ************************************ 00:06:17.075 START TEST accel_compress_verify 00:06:17.075 ************************************ 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.075 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.075 06:02:10 
accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:17.075 06:02:10 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:17.075 [2024-07-23 06:02:10.403848] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:17.075 [2024-07-23 06:02:10.403911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612404 ] 00:06:17.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.333 [2024-07-23 06:02:10.435476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.333 [2024-07-23 06:02:10.467670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.333 [2024-07-23 06:02:10.557932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.333 [2024-07-23 06:02:10.619698] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.592 [2024-07-23 06:02:10.698032] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:17.592 00:06:17.592 Compression does not support the verify option, aborting. 
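Note: the two aborts above are the intended results of these negative tests: a compress workload needs an input file via -l ('A filename is required.'), and the verify switch -y is not supported for compress, so accel_perf stops before starting. A positive counterpart, assuming the commands are run from the spdk checkout used here and that test/accel/bib is the sample input referenced by the surrounding tests:

  $ ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib       # valid: input file given, no -y
  $ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib     # decompress takes the same uncompressed input (per the -l help text)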
00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.592 00:06:17.592 real 0m0.392s 00:06:17.592 user 0m0.278s 00:06:17.592 sys 0m0.148s 00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.592 06:02:10 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:17.592 ************************************ 00:06:17.592 END TEST accel_compress_verify 00:06:17.592 ************************************ 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.592 06:02:10 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.592 ************************************ 00:06:17.592 START TEST accel_wrong_workload 00:06:17.592 ************************************ 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:17.592 06:02:10 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:17.592 Unsupported workload type: foobar 00:06:17.592 [2024-07-23 06:02:10.839147] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:17.592 accel_perf options: 00:06:17.592 [-h help message] 00:06:17.592 [-q queue depth per core] 00:06:17.592 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.592 [-T number of threads per core 00:06:17.592 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:17.592 [-t time in seconds] 00:06:17.592 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.592 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:17.592 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.592 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.592 [-S for crc32c workload, use this seed value (default 0) 00:06:17.592 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.592 [-f for fill workload, use this BYTE value (default 255) 00:06:17.592 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.592 [-y verify result if this switch is on] 00:06:17.592 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.592 Can be used to spread operations across a wider range of memory. 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.592 00:06:17.592 real 0m0.021s 00:06:17.592 user 0m0.008s 00:06:17.592 sys 0m0.013s 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.592 06:02:10 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:17.592 ************************************ 00:06:17.592 END TEST accel_wrong_workload 00:06:17.592 ************************************ 00:06:17.592 Error: writing output failed: Broken pipe 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.592 06:02:10 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.592 06:02:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.592 ************************************ 00:06:17.592 START TEST accel_negative_buffers 00:06:17.592 ************************************ 00:06:17.592 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.592 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:17.592 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:17.593 06:02:10 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:17.593 -x option must be non-negative. 00:06:17.593 [2024-07-23 06:02:10.910525] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:17.593 accel_perf options: 00:06:17.593 [-h help message] 00:06:17.593 [-q queue depth per core] 00:06:17.593 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.593 [-T number of threads per core 00:06:17.593 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:17.593 [-t time in seconds] 00:06:17.593 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.593 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:17.593 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.593 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.593 [-S for crc32c workload, use this seed value (default 0) 00:06:17.593 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.593 [-f for fill workload, use this BYTE value (default 255) 00:06:17.593 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.593 [-y verify result if this switch is on] 00:06:17.593 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.593 Can be used to spread operations across a wider range of memory. 
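Note: the usage text above (printed when -w foobar and -x -1 are rejected) documents the flags the remaining accel tests rely on. A minimal sketch of well-formed invocations built only from the options listed, with queue depth and transfer size chosen as arbitrary example values:

  $ ./build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y    # crc32c with seed 32, result verification on
  $ ./build/examples/accel_perf -q 32 -o 4096 -t 1 -w xor -x 2 -y        # xor requires at least 2 source buffers, so -x must be >= 2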
00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.593 00:06:17.593 real 0m0.024s 00:06:17.593 user 0m0.014s 00:06:17.593 sys 0m0.011s 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.593 06:02:10 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:17.593 ************************************ 00:06:17.593 END TEST accel_negative_buffers 00:06:17.593 ************************************ 00:06:17.593 Error: writing output failed: Broken pipe 00:06:17.593 06:02:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.593 06:02:10 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:17.593 06:02:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:17.593 06:02:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.593 06:02:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.854 ************************************ 00:06:17.854 START TEST accel_crc32c 00:06:17.854 ************************************ 00:06:17.854 06:02:10 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:17.854 06:02:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:17.854 [2024-07-23 06:02:10.971213] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:17.854 [2024-07-23 06:02:10.971283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612594 ] 00:06:17.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.854 [2024-07-23 06:02:11.004689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:17.854 [2024-07-23 06:02:11.034518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.854 [2024-07-23 06:02:11.127535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.854 06:02:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.234 
06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:19.234 06:02:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.234 00:06:19.234 real 0m1.410s 00:06:19.234 user 0m1.267s 00:06:19.234 sys 0m0.145s 00:06:19.234 06:02:12 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.234 06:02:12 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:19.234 ************************************ 00:06:19.234 END TEST accel_crc32c 00:06:19.234 ************************************ 00:06:19.234 06:02:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.234 06:02:12 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:19.234 06:02:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:19.234 06:02:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.234 06:02:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.234 ************************************ 00:06:19.234 START TEST accel_crc32c_C2 00:06:19.234 ************************************ 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.234 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:19.234 [2024-07-23 06:02:12.426097] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:06:19.234 [2024-07-23 06:02:12.426159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612747 ] 00:06:19.234 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.234 [2024-07-23 06:02:12.458022] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.234 [2024-07-23 06:02:12.487809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.493 [2024-07-23 06:02:12.582283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.493 06:02:12 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.493 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.494 06:02:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.876 00:06:20.876 real 0m1.409s 00:06:20.876 user 0m1.262s 00:06:20.876 sys 0m0.148s 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.876 06:02:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:20.876 ************************************ 00:06:20.876 END TEST accel_crc32c_C2 00:06:20.876 ************************************ 00:06:20.876 06:02:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.876 06:02:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:20.876 06:02:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.876 06:02:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.876 06:02:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.876 ************************************ 00:06:20.876 START TEST accel_copy 00:06:20.876 ************************************ 00:06:20.876 06:02:13 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:20.876 06:02:13 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:20.876 [2024-07-23 06:02:13.880223] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:20.876 [2024-07-23 06:02:13.880286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612908 ] 00:06:20.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.876 [2024-07-23 06:02:13.912332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.876 [2024-07-23 06:02:13.942060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.876 [2024-07-23 06:02:14.036910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.876 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.877 06:02:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:22.260 06:02:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.260 00:06:22.260 real 0m1.397s 00:06:22.260 user 0m1.252s 00:06:22.260 sys 0m0.146s 00:06:22.260 06:02:15 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.260 06:02:15 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.260 ************************************ 00:06:22.260 END TEST accel_copy 00:06:22.260 ************************************ 00:06:22.260 06:02:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.260 06:02:15 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.260 06:02:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:22.260 06:02:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.260 06:02:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.260 ************************************ 00:06:22.260 START TEST accel_fill 00:06:22.260 ************************************ 00:06:22.260 06:02:15 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
00:06:22.260 [2024-07-23 06:02:15.319560] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:22.260 [2024-07-23 06:02:15.319666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613180 ] 00:06:22.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.260 [2024-07-23 06:02:15.351704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.260 [2024-07-23 06:02:15.383445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.260 [2024-07-23 06:02:15.476232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.260 06:02:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.642 06:02:16 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:23.642 06:02:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.642 00:06:23.642 real 0m1.411s 00:06:23.642 user 0m1.264s 00:06:23.642 sys 0m0.150s 00:06:23.642 06:02:16 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.642 06:02:16 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:23.642 ************************************ 00:06:23.642 END TEST accel_fill 00:06:23.642 ************************************ 00:06:23.642 06:02:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.642 06:02:16 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:23.642 06:02:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:23.642 06:02:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.642 06:02:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.642 ************************************ 00:06:23.642 START TEST accel_copy_crc32c 00:06:23.642 ************************************ 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.642 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:23.642 
06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:23.642 [2024-07-23 06:02:16.772174] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:23.642 [2024-07-23 06:02:16.772237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613333 ] 00:06:23.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.642 [2024-07-23 06:02:16.804219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.642 [2024-07-23 06:02:16.833997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.642 [2024-07-23 06:02:16.926646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.902 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.902 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.902 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.902 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.903 06:02:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.903 06:02:16 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.842 00:06:24.842 real 0m1.403s 00:06:24.842 user 0m1.269s 00:06:24.842 sys 0m0.137s 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.842 06:02:18 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:24.842 ************************************ 00:06:24.842 END TEST accel_copy_crc32c 00:06:24.842 ************************************ 00:06:24.842 06:02:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.842 06:02:18 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:24.842 06:02:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.842 06:02:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.842 06:02:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.101 ************************************ 00:06:25.101 START TEST accel_copy_crc32c_C2 00:06:25.101 ************************************ 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local 
accel_opc 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:25.101 [2024-07-23 06:02:18.215823] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:25.101 [2024-07-23 06:02:18.215885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613492 ] 00:06:25.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.101 [2024-07-23 06:02:18.248488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:25.101 [2024-07-23 06:02:18.278193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.101 [2024-07-23 06:02:18.373027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.101 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.360 06:02:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.298 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.299 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.299 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.299 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.299 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:26.299 06:02:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.299 00:06:26.299 real 0m1.408s 00:06:26.299 user 0m1.266s 00:06:26.299 sys 0m0.144s 00:06:26.299 06:02:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.299 06:02:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:26.299 ************************************ 00:06:26.299 END TEST accel_copy_crc32c_C2 00:06:26.299 ************************************ 00:06:26.299 06:02:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.299 06:02:19 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:26.299 06:02:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:26.299 06:02:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.299 06:02:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.558 ************************************ 00:06:26.558 START TEST accel_dualcast 00:06:26.558 ************************************ 00:06:26.558 06:02:19 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:26.558 [2024-07-23 06:02:19.666148] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:26.558 [2024-07-23 06:02:19.666209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613758 ] 00:06:26.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.558 [2024-07-23 06:02:19.699052] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.558 [2024-07-23 06:02:19.728750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.558 [2024-07-23 06:02:19.822152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:26.558 06:02:19 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.558 06:02:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.942 06:02:21 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:27.942 06:02:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.942 00:06:27.942 real 0m1.391s 00:06:27.943 user 0m1.255s 00:06:27.943 sys 0m0.137s 00:06:27.943 06:02:21 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.943 06:02:21 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 ************************************ 00:06:27.943 END TEST accel_dualcast 00:06:27.943 ************************************ 00:06:27.943 06:02:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.943 06:02:21 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:27.943 06:02:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:27.943 06:02:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.943 06:02:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 ************************************ 00:06:27.943 START TEST accel_compare 00:06:27.943 ************************************ 00:06:27.943 06:02:21 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:27.943 06:02:21 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:27.943 06:02:21 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:27.943 [2024-07-23 06:02:21.104007] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:27.943 [2024-07-23 06:02:21.104071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613927 ] 00:06:27.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.943 [2024-07-23 06:02:21.135981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.943 [2024-07-23 06:02:21.167579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.943 [2024-07-23 06:02:21.259225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@23 
-- # accel_opc=compare 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.204 06:02:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:29.584 06:02:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.584 00:06:29.584 real 0m1.410s 00:06:29.584 user 0m1.267s 00:06:29.584 sys 0m0.145s 00:06:29.584 06:02:22 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.584 06:02:22 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:29.584 ************************************ 00:06:29.584 END TEST accel_compare 00:06:29.584 ************************************ 00:06:29.584 06:02:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.584 06:02:22 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:29.584 06:02:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.584 06:02:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.584 06:02:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.584 ************************************ 00:06:29.584 START TEST accel_xor 00:06:29.584 ************************************ 00:06:29.584 06:02:22 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 
accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:29.584 [2024-07-23 06:02:22.555137] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:29.584 [2024-07-23 06:02:22.555202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614079 ] 00:06:29.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.584 [2024-07-23 06:02:22.587741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.584 [2024-07-23 06:02:22.617505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.584 [2024-07-23 06:02:22.710260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 
accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.584 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.585 06:02:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.965 06:02:23 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:30.965 06:02:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.965 00:06:30.965 real 0m1.408s 00:06:30.965 user 0m1.265s 00:06:30.965 sys 0m0.145s 00:06:30.965 06:02:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.965 06:02:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:30.965 ************************************ 00:06:30.965 END TEST accel_xor 00:06:30.965 ************************************ 00:06:30.965 06:02:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.965 06:02:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:30.965 06:02:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.965 06:02:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.965 06:02:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.965 ************************************ 00:06:30.965 START TEST accel_xor 00:06:30.965 ************************************ 00:06:30.966 06:02:23 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:30.966 06:02:23 accel.accel_xor 
-- accel/accel.sh@12 -- # build_accel_config 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:30.966 06:02:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:30.966 [2024-07-23 06:02:24.009991] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:30.966 [2024-07-23 06:02:24.010055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614232 ] 00:06:30.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.966 [2024-07-23 06:02:24.042788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.966 [2024-07-23 06:02:24.072800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.966 [2024-07-23 06:02:24.170598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.966 06:02:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@20 -- # 
val= 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:32.348 06:02:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.348 00:06:32.348 real 0m1.409s 00:06:32.348 user 0m1.267s 00:06:32.348 sys 0m0.144s 00:06:32.348 06:02:25 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.348 06:02:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:32.348 ************************************ 00:06:32.348 END TEST accel_xor 00:06:32.348 ************************************ 00:06:32.348 06:02:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.348 06:02:25 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:32.348 06:02:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:32.348 06:02:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.348 06:02:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.348 ************************************ 00:06:32.348 START TEST accel_dif_verify 00:06:32.348 ************************************ 00:06:32.348 06:02:25 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:32.348 [2024-07-23 06:02:25.458385] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:32.348 [2024-07-23 06:02:25.458450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614504 ] 00:06:32.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.348 [2024-07-23 06:02:25.491024] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.348 [2024-07-23 06:02:25.520598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.348 [2024-07-23 06:02:25.610699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.348 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.349 06:02:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:33.731 06:02:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.731 00:06:33.731 real 0m1.403s 00:06:33.731 user 0m1.265s 00:06:33.731 sys 0m0.143s 00:06:33.731 06:02:26 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.731 06:02:26 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:33.731 ************************************ 00:06:33.731 END TEST accel_dif_verify 00:06:33.731 ************************************ 00:06:33.731 06:02:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.731 06:02:26 accel -- 
accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:33.731 06:02:26 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:33.731 06:02:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.731 06:02:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.731 ************************************ 00:06:33.731 START TEST accel_dif_generate 00:06:33.731 ************************************ 00:06:33.731 06:02:26 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:33.731 06:02:26 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:33.731 [2024-07-23 06:02:26.905573] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:33.731 [2024-07-23 06:02:26.905663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614667 ] 00:06:33.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.731 [2024-07-23 06:02:26.938385] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:33.731 [2024-07-23 06:02:26.968195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.731 [2024-07-23 06:02:27.061168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 
06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.992 06:02:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.389 06:02:28 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:35.389 06:02:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.389 00:06:35.389 real 0m1.408s 00:06:35.389 user 0m1.264s 00:06:35.389 sys 0m0.149s 00:06:35.389 06:02:28 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.389 06:02:28 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:35.389 ************************************ 00:06:35.389 END TEST accel_dif_generate 00:06:35.389 ************************************ 00:06:35.389 06:02:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.389 06:02:28 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:35.389 06:02:28 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:35.389 06:02:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.389 06:02:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.389 ************************************ 00:06:35.389 START TEST accel_dif_generate_copy 00:06:35.389 ************************************ 00:06:35.389 06:02:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:35.389 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:35.389 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:35.390 [2024-07-23 06:02:28.355570] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:35.390 [2024-07-23 06:02:28.355646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614824 ] 00:06:35.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.390 [2024-07-23 06:02:28.387782] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:35.390 [2024-07-23 06:02:28.417388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.390 [2024-07-23 06:02:28.511081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.390 06:02:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:36.771 00:06:36.771 real 0m1.406s 00:06:36.771 user 0m1.252s 00:06:36.771 sys 0m0.156s 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.771 06:02:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.771 ************************************ 00:06:36.771 END TEST accel_dif_generate_copy 00:06:36.771 ************************************ 00:06:36.771 06:02:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.771 06:02:29 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:36.771 06:02:29 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.771 06:02:29 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:36.771 06:02:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.771 06:02:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.771 ************************************ 00:06:36.771 START TEST accel_comp 00:06:36.771 ************************************ 00:06:36.771 06:02:29 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:36.771 06:02:29 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:36.771 [2024-07-23 06:02:29.810049] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:36.771 [2024-07-23 06:02:29.810109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615064 ] 00:06:36.771 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.771 [2024-07-23 06:02:29.841934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:36.771 [2024-07-23 06:02:29.873743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.771 [2024-07-23 06:02:29.966436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.771 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.772 06:02:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.772 06:02:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.772 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.772 06:02:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:38.155 06:02:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.155 00:06:38.155 real 0m1.412s 00:06:38.155 user 0m1.260s 00:06:38.155 sys 0m0.156s 00:06:38.155 06:02:31 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.155 06:02:31 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:38.155 ************************************ 00:06:38.155 END TEST accel_comp 00:06:38.155 ************************************ 00:06:38.155 06:02:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.155 06:02:31 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.155 06:02:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:38.155 06:02:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.155 06:02:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.155 ************************************ 00:06:38.155 START TEST accel_decomp 00:06:38.155 ************************************ 00:06:38.155 06:02:31 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.155 06:02:31 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
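For reference, a minimal standalone sketch of the decompress invocation traced just above; the binary path, input file, and flags are copied from the logged command line, their meanings are inferred from the traced values rather than verified against this run, and the JSON accel config the harness pipes in over /dev/fd/62 is omitted since no modules are configured (accel_json_cfg=() above).
# Sketch only, assuming the flag meanings implied by the traced values:
#   -t 1            run the workload for 1 second ('1 seconds' in the trace)
#   -w decompress   software decompress workload
#   -l .../bib      compressed input file from the SPDK test tree
#   -y              verify the decompressed output
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y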
00:06:38.156 [2024-07-23 06:02:31.262392] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:38.156 [2024-07-23 06:02:31.262455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615252 ] 00:06:38.156 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.156 [2024-07-23 06:02:31.295056] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:38.156 [2024-07-23 06:02:31.321442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.156 [2024-07-23 06:02:31.410169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.156 06:02:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 06:02:32 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.539 06:02:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.539 00:06:39.539 real 0m1.397s 00:06:39.539 user 0m1.263s 00:06:39.539 sys 0m0.139s 00:06:39.539 06:02:32 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.539 06:02:32 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:39.539 ************************************ 00:06:39.539 END TEST accel_decomp 00:06:39.539 ************************************ 00:06:39.539 06:02:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.539 06:02:32 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.539 06:02:32 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:39.539 06:02:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.539 06:02:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.539 ************************************ 00:06:39.539 START TEST accel_decomp_full 00:06:39.539 ************************************ 00:06:39.539 06:02:32 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:39.539 06:02:32 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:39.539 [2024-07-23 06:02:32.707719] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:39.539 [2024-07-23 06:02:32.707775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615412 ] 00:06:39.539 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.539 [2024-07-23 06:02:32.739973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.539 [2024-07-23 06:02:32.769411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.539 [2024-07-23 06:02:32.862314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.799 06:02:32 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.799 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.800 06:02:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.180 06:02:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.180 06:02:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.180 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.180 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.180 06:02:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.180 06:02:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.180 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.181 06:02:34 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.181 00:06:41.181 real 0m1.422s 00:06:41.181 user 0m1.276s 00:06:41.181 sys 0m0.149s 00:06:41.181 06:02:34 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.181 06:02:34 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 
00:06:41.181 ************************************ 00:06:41.181 END TEST accel_decomp_full 00:06:41.181 ************************************ 00:06:41.181 06:02:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.181 06:02:34 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:41.181 06:02:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:41.181 06:02:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.181 06:02:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.181 ************************************ 00:06:41.181 START TEST accel_decomp_mcore 00:06:41.181 ************************************ 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:41.181 [2024-07-23 06:02:34.173524] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:41.181 [2024-07-23 06:02:34.173583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615563 ] 00:06:41.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.181 [2024-07-23 06:02:34.206116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
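The mcore variant started above differs from the single-core decompress run only in the core mask handed to accel_perf; a hedged sketch of that difference follows, with the mask value taken from the logged command line and its interpretation (one reactor per set bit) inferred from the four "Reactor started" notices reported next.
# Sketch only: same decompress workload, but -m 0xf requests cores 0-3,
# consistent with the four reactors reported started in the log below.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf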
00:06:41.181 [2024-07-23 06:02:34.236556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.181 [2024-07-23 06:02:34.331087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.181 [2024-07-23 06:02:34.331166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.181 [2024-07-23 06:02:34.331254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.181 [2024-07-23 06:02:34.331256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.181 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.182 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.182 06:02:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 
06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.557 00:06:42.557 real 0m1.409s 00:06:42.557 user 0m4.703s 00:06:42.557 sys 0m0.142s 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.557 06:02:35 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:42.557 ************************************ 00:06:42.557 END TEST accel_decomp_mcore 00:06:42.557 ************************************ 00:06:42.557 06:02:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.557 
06:02:35 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.557 06:02:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:42.557 06:02:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.557 06:02:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.557 ************************************ 00:06:42.557 START TEST accel_decomp_full_mcore 00:06:42.557 ************************************ 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:42.557 [2024-07-23 06:02:35.630843] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:42.557 [2024-07-23 06:02:35.630905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615838 ] 00:06:42.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.557 [2024-07-23 06:02:35.663500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:42.557 [2024-07-23 06:02:35.692539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.557 [2024-07-23 06:02:35.788337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.557 [2024-07-23 06:02:35.788404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.557 [2024-07-23 06:02:35.788493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.557 [2024-07-23 06:02:35.788495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.557 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:42.558 06:02:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.971 00:06:43.971 real 0m1.417s 00:06:43.971 user 0m4.722s 00:06:43.971 sys 0m0.158s 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.971 06:02:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:43.971 ************************************ 00:06:43.971 END TEST accel_decomp_full_mcore 00:06:43.971 ************************************ 00:06:43.971 06:02:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.971 06:02:37 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.971 06:02:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:43.971 06:02:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.971 06:02:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.971 ************************************ 00:06:43.971 START TEST accel_decomp_mthread 00:06:43.971 ************************************ 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:43.971 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:43.971 [2024-07-23 06:02:37.095052] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:43.971 [2024-07-23 06:02:37.095118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616004 ] 00:06:43.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.971 [2024-07-23 06:02:37.126588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:43.971 [2024-07-23 06:02:37.158393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.971 [2024-07-23 06:02:37.254195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.233 06:02:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.172 00:06:45.172 real 0m1.420s 00:06:45.172 user 0m1.274s 00:06:45.172 sys 0m0.150s 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.172 06:02:38 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:45.172 ************************************ 00:06:45.172 END TEST accel_decomp_mthread 00:06:45.172 ************************************ 00:06:45.432 06:02:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.432 06:02:38 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.432 06:02:38 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:45.432 06:02:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.432 06:02:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.432 ************************************ 00:06:45.432 START TEST accel_decomp_full_mthread 00:06:45.432 ************************************ 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:45.432 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:45.432 [2024-07-23 06:02:38.569255] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:45.432 [2024-07-23 06:02:38.569321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616162 ] 00:06:45.432 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.432 [2024-07-23 06:02:38.600733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:45.432 [2024-07-23 06:02:38.632708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.432 [2024-07-23 06:02:38.726390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.705 06:02:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.089 06:02:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.089 00:06:47.089 real 0m1.450s 00:06:47.089 user 0m1.304s 00:06:47.089 sys 0m0.150s 00:06:47.090 06:02:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.090 06:02:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:47.090 ************************************ 00:06:47.090 END TEST accel_decomp_full_mthread 00:06:47.090 ************************************ 00:06:47.090 06:02:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.090 06:02:40 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:47.090 06:02:40 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 
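The two *_mthread passes above differ from the mcore runs mainly in the '-T 2' argument (echoed as 'val=2' in the trace), presumably two worker threads on the single core 0x1 reported by the EAL lines rather than a 0xf core mask; the 'full' flavor again adds '-o 0'. A sketch of the two invocations, under the same assumptions as the earlier example:

    # threaded decompress, default 4096-byte operations
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2
    # threaded decompress, whole-buffer (111250-byte) operations
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2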
00:06:47.090 06:02:40 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:47.090 06:02:40 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:47.090 06:02:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.090 06:02:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.090 06:02:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.090 06:02:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.090 06:02:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.090 06:02:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.090 06:02:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.090 06:02:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:47.090 06:02:40 accel -- accel/accel.sh@41 -- # jq -r . 00:06:47.090 ************************************ 00:06:47.090 START TEST accel_dif_functional_tests 00:06:47.090 ************************************ 00:06:47.090 06:02:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:47.090 [2024-07-23 06:02:40.084424] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:47.090 [2024-07-23 06:02:40.084505] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616431 ] 00:06:47.090 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.090 [2024-07-23 06:02:40.115232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:47.090 [2024-07-23 06:02:40.147070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.090 [2024-07-23 06:02:40.243055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.090 [2024-07-23 06:02:40.243111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.090 [2024-07-23 06:02:40.243128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.090 00:06:47.090 00:06:47.090 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.090 http://cunit.sourceforge.net/ 00:06:47.090 00:06:47.090 00:06:47.090 Suite: accel_dif 00:06:47.090 Test: verify: DIF generated, GUARD check ...passed 00:06:47.090 Test: verify: DIF generated, APPTAG check ...passed 00:06:47.090 Test: verify: DIF generated, REFTAG check ...passed 00:06:47.090 Test: verify: DIF not generated, GUARD check ...[2024-07-23 06:02:40.337023] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:47.090 passed 00:06:47.090 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 06:02:40.337086] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:47.090 passed 00:06:47.090 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 06:02:40.337118] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:47.090 passed 00:06:47.090 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:47.090 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 06:02:40.337176] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:47.090 passed 00:06:47.090 Test: verify: APPTAG incorrect, no APPTAG check ...passed 
00:06:47.090 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:47.090 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:47.090 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 06:02:40.337301] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:47.090 passed 00:06:47.090 Test: verify copy: DIF generated, GUARD check ...passed 00:06:47.090 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:47.090 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:47.090 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 06:02:40.337454] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:47.090 passed 00:06:47.090 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 06:02:40.337488] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:47.090 passed 00:06:47.090 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 06:02:40.337520] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:47.090 passed 00:06:47.090 Test: generate copy: DIF generated, GUARD check ...passed 00:06:47.090 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:47.090 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:47.090 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:47.090 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:47.090 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:47.090 Test: generate copy: iovecs-len validate ...[2024-07-23 06:02:40.337765] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:47.090 passed 00:06:47.090 Test: generate copy: buffer alignment validate ...passed 00:06:47.090 00:06:47.090 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.090 suites 1 1 n/a 0 0 00:06:47.090 tests 26 26 26 0 0 00:06:47.090 asserts 115 115 115 0 n/a 00:06:47.090 00:06:47.090 Elapsed time = 0.002 seconds 00:06:47.348 00:06:47.348 real 0m0.494s 00:06:47.348 user 0m0.756s 00:06:47.348 sys 0m0.179s 00:06:47.348 06:02:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.348 06:02:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:47.348 ************************************ 00:06:47.348 END TEST accel_dif_functional_tests 00:06:47.348 ************************************ 00:06:47.348 06:02:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.348 00:06:47.348 real 0m31.723s 00:06:47.348 user 0m35.112s 00:06:47.348 sys 0m4.600s 00:06:47.348 06:02:40 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.348 06:02:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.348 ************************************ 00:06:47.348 END TEST accel 00:06:47.348 ************************************ 00:06:47.348 06:02:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.348 06:02:40 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:47.348 06:02:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.348 06:02:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.348 06:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:47.348 ************************************ 00:06:47.348 START TEST accel_rpc 00:06:47.348 ************************************ 00:06:47.348 06:02:40 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:47.348 * Looking for test storage... 00:06:47.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:47.348 06:02:40 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.348 06:02:40 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1616508 00:06:47.348 06:02:40 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:47.348 06:02:40 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1616508 00:06:47.348 06:02:40 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1616508 ']' 00:06:47.348 06:02:40 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.348 06:02:40 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.348 06:02:40 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.348 06:02:40 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.348 06:02:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.608 [2024-07-23 06:02:40.694417] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
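Note on the DIF functional suite that closes above: the dif.c *ERROR* lines are expected output. They come from the negative test cases ('verify: DIF not generated', 'iovecs-len validate', and similar) in which the guard, application and reference tags are deliberately mismatched, and the CUnit run summary (26 tests run, 26 passed, 0 failed, elapsed 0.002 seconds) is the authoritative result. The suite is a standalone binary that, as captured in the trace, receives the accel JSON config on an inherited descriptor:

    # exactly as the harness runs it (fd 62 carries the JSON built by build_accel_config)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62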
00:06:47.608 [2024-07-23 06:02:40.694508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616508 ] 00:06:47.608 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.608 [2024-07-23 06:02:40.725423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:47.608 [2024-07-23 06:02:40.753315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.608 [2024-07-23 06:02:40.842570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.608 06:02:40 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.608 06:02:40 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:47.608 06:02:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:47.608 06:02:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:47.608 06:02:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:47.608 06:02:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:47.608 06:02:40 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:47.608 06:02:40 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.608 06:02:40 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.608 06:02:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.608 ************************************ 00:06:47.608 START TEST accel_assign_opcode 00:06:47.608 ************************************ 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.608 [2024-07-23 06:02:40.931263] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.608 [2024-07-23 06:02:40.939273] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.608 06:02:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.866 06:02:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.866 06:02:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:47.866 
06:02:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.866 06:02:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.866 06:02:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:47.866 06:02:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:47.866 06:02:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.125 software 00:06:48.125 00:06:48.125 real 0m0.293s 00:06:48.125 user 0m0.039s 00:06:48.125 sys 0m0.008s 00:06:48.125 06:02:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.125 06:02:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.125 ************************************ 00:06:48.125 END TEST accel_assign_opcode 00:06:48.125 ************************************ 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:48.125 06:02:41 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1616508 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1616508 ']' 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1616508 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1616508 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1616508' 00:06:48.125 killing process with pid 1616508 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@967 -- # kill 1616508 00:06:48.125 06:02:41 accel_rpc -- common/autotest_common.sh@972 -- # wait 1616508 00:06:48.384 00:06:48.384 real 0m1.068s 00:06:48.384 user 0m1.012s 00:06:48.384 sys 0m0.415s 00:06:48.384 06:02:41 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.384 06:02:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.384 ************************************ 00:06:48.384 END TEST accel_rpc 00:06:48.384 ************************************ 00:06:48.384 06:02:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.384 06:02:41 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.384 06:02:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.384 06:02:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.384 06:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:48.384 ************************************ 00:06:48.384 START TEST app_cmdline 00:06:48.384 ************************************ 00:06:48.384 06:02:41 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.645 * Looking for test storage... 
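The accel_rpc suite that finishes above drives opcode-to-module assignment over JSON-RPC against a spdk_tgt started with --wait-for-rpc, so the assignments are made before framework initialization. The equivalent manual sequence is roughly the following (a sketch, same repo root assumed):

    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # assign before init
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints: software

In the trace, an assignment to a bogus module ('-m incorrect') is also accepted at RPC time and merely logged; the test then confirms via accel_get_opc_assignments that the copy opcode ends up on the software module.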
00:06:48.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:48.645 06:02:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.645 06:02:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1616714 00:06:48.645 06:02:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.645 06:02:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1616714 00:06:48.645 06:02:41 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1616714 ']' 00:06:48.645 06:02:41 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.645 06:02:41 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.645 06:02:41 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.645 06:02:41 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.645 06:02:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.645 [2024-07-23 06:02:41.816107] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:48.645 [2024-07-23 06:02:41.816191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616714 ] 00:06:48.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.645 [2024-07-23 06:02:41.847467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:48.645 [2024-07-23 06:02:41.873124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.645 [2024-07-23 06:02:41.956446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.904 06:02:42 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.904 06:02:42 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:48.904 06:02:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:49.167 { 00:06:49.167 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:06:49.167 "fields": { 00:06:49.167 "major": 24, 00:06:49.167 "minor": 9, 00:06:49.167 "patch": 0, 00:06:49.167 "suffix": "-pre", 00:06:49.167 "commit": "f7b31b2b9" 00:06:49.167 } 00:06:49.167 } 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:49.167 06:02:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:49.167 06:02:42 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.429 request: 00:06:49.429 { 00:06:49.429 "method": 
"env_dpdk_get_mem_stats", 00:06:49.429 "req_id": 1 00:06:49.429 } 00:06:49.429 Got JSON-RPC error response 00:06:49.429 response: 00:06:49.429 { 00:06:49.429 "code": -32601, 00:06:49.429 "message": "Method not found" 00:06:49.429 } 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.429 06:02:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1616714 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1616714 ']' 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1616714 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.429 06:02:42 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1616714 00:06:49.689 06:02:42 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.689 06:02:42 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.689 06:02:42 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1616714' 00:06:49.689 killing process with pid 1616714 00:06:49.689 06:02:42 app_cmdline -- common/autotest_common.sh@967 -- # kill 1616714 00:06:49.689 06:02:42 app_cmdline -- common/autotest_common.sh@972 -- # wait 1616714 00:06:49.949 00:06:49.949 real 0m1.470s 00:06:49.949 user 0m1.782s 00:06:49.949 sys 0m0.453s 00:06:49.949 06:02:43 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.949 06:02:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.949 ************************************ 00:06:49.949 END TEST app_cmdline 00:06:49.949 ************************************ 00:06:49.949 06:02:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.949 06:02:43 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.949 06:02:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.949 06:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.949 06:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:49.949 ************************************ 00:06:49.949 START TEST version 00:06:49.949 ************************************ 00:06:49.949 06:02:43 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.949 * Looking for test storage... 
00:06:49.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:49.949 06:02:43 version -- app/version.sh@17 -- # get_header_version major 00:06:49.949 06:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.949 06:02:43 version -- app/version.sh@14 -- # cut -f2 00:06:49.949 06:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.949 06:02:43 version -- app/version.sh@17 -- # major=24 00:06:49.949 06:02:43 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.949 06:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.949 06:02:43 version -- app/version.sh@14 -- # cut -f2 00:06:49.949 06:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.208 06:02:43 version -- app/version.sh@18 -- # minor=9 00:06:50.208 06:02:43 version -- app/version.sh@19 -- # get_header_version patch 00:06:50.208 06:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.208 06:02:43 version -- app/version.sh@14 -- # cut -f2 00:06:50.208 06:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.208 06:02:43 version -- app/version.sh@19 -- # patch=0 00:06:50.208 06:02:43 version -- app/version.sh@20 -- # get_header_version suffix 00:06:50.208 06:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.208 06:02:43 version -- app/version.sh@14 -- # cut -f2 00:06:50.208 06:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.208 06:02:43 version -- app/version.sh@20 -- # suffix=-pre 00:06:50.208 06:02:43 version -- app/version.sh@22 -- # version=24.9 00:06:50.208 06:02:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:50.208 06:02:43 version -- app/version.sh@28 -- # version=24.9rc0 00:06:50.208 06:02:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:50.208 06:02:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:50.208 06:02:43 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:50.208 06:02:43 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:50.208 00:06:50.208 real 0m0.110s 00:06:50.208 user 0m0.063s 00:06:50.208 sys 0m0.069s 00:06:50.208 06:02:43 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.208 06:02:43 version -- common/autotest_common.sh@10 -- # set +x 00:06:50.208 ************************************ 00:06:50.208 END TEST version 00:06:50.208 ************************************ 00:06:50.208 06:02:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.208 06:02:43 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@198 -- # uname -s 00:06:50.208 06:02:43 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:50.208 06:02:43 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:50.208 06:02:43 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 
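The version test above reduces to parsing the SPDK_VERSION_* macros out of include/spdk/version.h and checking that the assembled string equals the Python package's spdk.__version__ (24.9rc0 in this run). A rough Python equivalent of that grep/cut/tr pipeline, with the header path taken as an assumption:

import re

def parse_spdk_version(header_path="include/spdk/version.h"):
    # Mirrors version.sh's get_header_version: pull the macros from the header,
    # then assemble "major.minor[.patch]" plus "rc0" for a "-pre" suffix.
    macros = {}
    pattern = re.compile(r'#define\s+SPDK_VERSION_(MAJOR|MINOR|PATCH|SUFFIX)\s+(.+)')
    with open(header_path) as header:
        for line in header:
            match = pattern.match(line)
            if match:
                macros[match.group(1)] = match.group(2).strip().strip('"')
    version = f"{macros['MAJOR']}.{macros['MINOR']}"
    if macros.get("PATCH", "0") != "0":
        version += f".{macros['PATCH']}"
    if macros.get("SUFFIX") == "-pre":
        version += "rc0"
    return version

# For this run: parse_spdk_version() == "24.9rc0" == spdk.__version__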
00:06:50.208 06:02:43 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:50.208 06:02:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.208 06:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:50.208 06:02:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:50.208 06:02:43 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:50.208 06:02:43 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.208 06:02:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.208 06:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.208 06:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:50.208 ************************************ 00:06:50.208 START TEST nvmf_tcp 00:06:50.208 ************************************ 00:06:50.208 06:02:43 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.208 * Looking for test storage... 00:06:50.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:50.208 06:02:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:50.208 06:02:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:50.208 06:02:43 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:50.208 06:02:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.208 06:02:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.208 06:02:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.208 ************************************ 00:06:50.208 START TEST nvmf_target_core 00:06:50.208 ************************************ 00:06:50.208 06:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:50.208 * Looking for test storage... 00:06:50.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:50.208 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:50.208 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:50.208 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.208 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:50.208 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:50.209 06:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:50.468 ************************************ 00:06:50.468 START TEST nvmf_abort 00:06:50.468 ************************************ 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:50.468 * Looking for test storage... 
00:06:50.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
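The nvmftestinit that runs next (prepare_net_devs / gather_supported_nvmf_pci_devs) walks the PCI bus for supported NIC IDs and records the net interfaces under each match, which is where the "Found 0000:0a:00.0 (0x8086 - 0x159b)" and "Found net devices under ...: cvl_0_0" lines below come from. A simplified sketch of that scan is given here; reading sysfs directly and the trimmed ID table (only a few of the entries visible in this log) are simplifications, not the exact mechanism nvmf/common.sh uses.

import os

# Abbreviated vendor:device table; common.sh builds much longer e810/x722/mlx arrays.
SUPPORTED_NICS = {
    ("0x8086", "0x159b"): "e810",   # the ice-driven ports found on this host
    ("0x8086", "0x1592"): "e810",
    ("0x8086", "0x37d2"): "x722",
    ("0x15b3", "0x1017"): "mlx",
}

def scan_pci_nics(sysfs="/sys/bus/pci/devices"):
    found = {}
    for bdf in sorted(os.listdir(sysfs)):
        dev = os.path.join(sysfs, bdf)
        try:
            with open(os.path.join(dev, "vendor")) as f:
                vendor = f.read().strip()
            with open(os.path.join(dev, "device")) as f:
                device = f.read().strip()
        except OSError:
            continue
        family = SUPPORTED_NICS.get((vendor, device))
        if family is None:
            continue
        net_dir = os.path.join(dev, "net")
        nics = os.listdir(net_dir) if os.path.isdir(net_dir) else []
        found[bdf] = (family, nics)  # e.g. '0000:0a:00.0' -> ('e810', ['cvl_0_0'])
    return found

With two interfaces found, the test then builds a point-to-point setup: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), verified with the two pings further down.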
00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.468 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:52.372 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:52.372 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:52.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.373 06:02:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:52.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:52.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:52.373 
06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:52.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:06:52.373 00:06:52.373 --- 10.0.0.2 ping statistics --- 00:06:52.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.373 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:06:52.373 00:06:52.373 --- 10.0.0.1 ping statistics --- 00:06:52.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.373 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1618752 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1618752 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1618752 ']' 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.373 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.633 [2024-07-23 06:02:45.747592] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:52.633 [2024-07-23 06:02:45.747674] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.633 [2024-07-23 06:02:45.782841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.633 [2024-07-23 06:02:45.811818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.633 [2024-07-23 06:02:45.903765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.633 [2024-07-23 06:02:45.903830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.633 [2024-07-23 06:02:45.903854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.633 [2024-07-23 06:02:45.903868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.633 [2024-07-23 06:02:45.903880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
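Once the namespaced nvmf_tgt is up on its three reactors, abort.sh configures it entirely through rpc_cmd. The sketch below restates the RPC calls that follow in this log as plain scripts/rpc.py invocations; the relative script path is an assumption, and the Unix-socket RPC is reachable from the root namespace even though the target runs inside cvl_0_0_ns_spdk.

import subprocess

def rpc(*args):
    # rpc_cmd in the test is a thin wrapper around scripts/rpc.py; the socket
    # defaults to /var/tmp/spdk.sock, which network namespaces do not isolate.
    subprocess.run(["./scripts/rpc.py", *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o", "-u", "8192", "-a", "256")
rpc("bdev_malloc_create", "64", "4096", "-b", "Malloc0")            # 64 MB bdev, 4096-byte blocks
rpc("bdev_delay_create", "-b", "Malloc0", "-d", "Delay0",
    "-r", "1000000", "-t", "1000000", "-w", "1000000", "-n", "1000000")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode0", "-a", "-s", "SPDK0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode0", "Delay0")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode0",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420")
rpc("nvmf_subsystem_add_listener", "discovery", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420")

build/examples/abort is then pointed at that listener with a queue depth of 128; the Delay0 bdev's artificial latency keeps I/Os outstanding long enough for the abort commands to find them, which is what produces the "abort submitted 31687, failed to submit 62" summary further down.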
00:06:52.633 [2024-07-23 06:02:45.903977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.633 [2024-07-23 06:02:45.904090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.633 [2024-07-23 06:02:45.904092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 [2024-07-23 06:02:46.048722] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 Malloc0 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 Delay0 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 [2024-07-23 06:02:46.123996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.895 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:52.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.895 [2024-07-23 06:02:46.219934] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:55.449 Initializing NVMe Controllers 00:06:55.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:55.449 controller IO queue size 128 less than required 00:06:55.449 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:55.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:55.449 Initialization complete. Launching workers. 
00:06:55.449 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 31622 00:06:55.449 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31687, failed to submit 62 00:06:55.449 success 31626, unsuccess 61, failed 0 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.449 rmmod nvme_tcp 00:06:55.449 rmmod nvme_fabrics 00:06:55.449 rmmod nvme_keyring 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1618752 ']' 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1618752 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1618752 ']' 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1618752 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618752 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618752' 00:06:55.449 killing process with pid 1618752 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1618752 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1618752 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:55.449 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.450 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:55.450 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.450 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.450 06:02:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:57.358 00:06:57.358 real 0m7.069s 00:06:57.358 user 0m10.158s 00:06:57.358 sys 0m2.487s 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:57.358 ************************************ 00:06:57.358 END TEST nvmf_abort 00:06:57.358 ************************************ 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.358 ************************************ 00:06:57.358 START TEST nvmf_ns_hotplug_stress 00:06:57.358 ************************************ 00:06:57.358 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:57.619 * Looking for test storage... 
00:06:57.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.619 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.620 06:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
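nvmftestinit has just torn down any stale target namespace and is about to scan the PCI bus; the e810/x722/mlx arrays declared in the entries that follow are filled from the vendor/device IDs printed there. A rough, hedged approximation of that scan for the E810 parts this rig reports (8086:159b), using plain lspci instead of the script's pci_bus_cache helper:

  # list E810 (0x159b) functions and the kernel net devices behind them
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done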
00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:59.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:59.539 06:02:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:59.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:59.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:59.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:59.539 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:59.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:59.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:06:59.540 00:06:59.540 --- 10.0.0.2 ping statistics --- 00:06:59.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.540 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:59.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:59.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:06:59.540 00:06:59.540 --- 10.0.0.1 ping statistics --- 00:06:59.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.540 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:59.540 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1620975 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1620975 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1620975 ']' 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
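Before the target application is launched, common.sh has split the two ports into a target-side network namespace and an initiator-side interface and verified connectivity both ways. Condensing the commands echoed in the trace above (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing come straight from the log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator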
00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.801 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:59.801 [2024-07-23 06:02:52.930398] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:59.801 [2024-07-23 06:02:52.930462] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.801 [2024-07-23 06:02:52.967167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.801 [2024-07-23 06:02:52.995414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.801 [2024-07-23 06:02:53.086943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.801 [2024-07-23 06:02:53.087001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.801 [2024-07-23 06:02:53.087029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.801 [2024-07-23 06:02:53.087051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.801 [2024-07-23 06:02:53.087064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:59.801 [2024-07-23 06:02:53.087161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.801 [2024-07-23 06:02:53.087278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.801 [2024-07-23 06:02:53.087281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:00.063 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:00.338 [2024-07-23 06:02:53.464088] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.338 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:00.602 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.859 [2024-07-23 06:02:53.986182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.859 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.128 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:01.386 Malloc0 00:07:01.386 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:01.644 Delay0 00:07:01.644 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.903 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:01.903 NULL1 00:07:01.903 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:02.161 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1621280 00:07:02.161 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:02.161 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:02.161 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.420 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.420 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.679 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:02.679 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:02.937 true 00:07:02.937 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:02.937 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.195 06:02:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.453 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:03.453 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:03.711 true 00:07:03.711 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:03.711 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.647 Read completed with error (sct=0, sc=11) 00:07:04.647 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.905 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:04.905 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:05.165 true 00:07:05.165 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:05.165 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.428 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.685 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:05.685 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:05.942 true 00:07:05.942 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:05.942 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.875 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.133 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:07.133 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:07.409 true 00:07:07.409 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:07.409 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.665 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.923 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:07.923 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:08.203 true 00:07:08.203 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:08.203 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.142 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.400 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:09.400 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:09.658 true 00:07:09.658 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:09.658 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.957 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.957 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:09.957 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:10.234 true 00:07:10.234 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:10.234 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.498 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.755 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:10.755 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:11.014 true 00:07:11.014 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:11.014 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.390 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.390 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:12.390 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:12.648 true 00:07:12.648 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:12.648 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.906 06:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.164 06:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:13.164 06:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:13.422 true 00:07:13.422 06:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:13.422 06:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.356 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.614 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:14.614 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:14.873 true 00:07:14.873 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:14.873 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.131 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.393 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:15.394 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:15.654 true 00:07:15.654 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:15.654 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.587 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.845 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:16.845 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:17.104 true 00:07:17.104 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:17.104 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.362 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.620 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:17.620 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:17.879 true 00:07:17.879 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:17.879 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.815 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.815 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:18.815 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:19.072 true 00:07:19.072 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:19.072 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.331 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.588 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:19.588 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:19.845 true 00:07:19.845 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:19.845 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.782 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.038 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:21.039 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:21.296 true 00:07:21.296 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:21.296 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.554 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.812 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:21.812 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:22.070 true 00:07:22.070 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1621280 00:07:22.070 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.019 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.278 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:23.278 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:23.278 true 00:07:23.278 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:23.278 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.536 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.795 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:23.795 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:24.064 true 00:07:24.064 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:24.064 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.001 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.259 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:25.259 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:25.517 true 00:07:25.517 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:25.517 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.775 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.033 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:26.033 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:26.291 true 00:07:26.291 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:26.291 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.229 06:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.488 06:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:27.488 06:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:27.745 true 00:07:27.745 06:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:27.746 06:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.004 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.262 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:28.263 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:28.521 true 00:07:28.521 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:28.521 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.459 06:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.718 06:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:29.718 06:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:29.718 true 00:07:29.718 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 1621280 00:07:29.718 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.976 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.232 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:30.232 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:30.490 true 00:07:30.490 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:30.490 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.429 06:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.686 06:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:31.686 06:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:31.944 true 00:07:31.944 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:31.944 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.202 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.485 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:32.485 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:32.743 true 00:07:32.743 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:32.743 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.681 Initializing NVMe Controllers 00:07:33.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.681 Controller IO queue size 128, less than required. 00:07:33.681 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.681 Controller IO queue size 128, less than required. 
00:07:33.681 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:33.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:33.681 Initialization complete. Launching workers. 00:07:33.681 ======================================================== 00:07:33.681 Latency(us) 00:07:33.681 Device Information : IOPS MiB/s Average min max 00:07:33.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 781.77 0.38 85129.81 2365.81 1037077.45 00:07:33.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10342.56 5.05 12339.69 2381.16 360963.99 00:07:33.681 ======================================================== 00:07:33.681 Total : 11124.33 5.43 17455.04 2365.81 1037077.45 00:07:33.681 00:07:33.681 06:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.941 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:33.941 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:33.941 true 00:07:34.200 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1621280 00:07:34.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1621280) - No such process 00:07:34.200 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1621280 00:07:34.200 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.200 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.458 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:34.458 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:34.458 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:34.458 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.458 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:34.716 null0 00:07:34.716 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.716 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.716 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:34.975 null1 
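With the perf run summarized and torn down, the script switches to its multi-threaded phase: nthreads=8, and the loop continuing through the next entries creates one null bdev per worker, null0 through null7 (size argument 100, block size 4096, exactly as the bdev_null_create calls show). A condensed, hedged equivalent of that setup loop:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 0 7); do
      # matches the bdev_null_create calls in the trace: name, size, block size
      $rpc bdev_null_create "null$i" 100 4096
  done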
00:07:34.975 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.975 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.975 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:35.235 null2 00:07:35.235 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.235 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.235 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:35.500 null3 00:07:35.500 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.500 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.500 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:35.758 null4 00:07:35.758 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.758 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.758 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:36.017 null5 00:07:36.017 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.017 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.017 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:36.275 null6 00:07:36.275 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.275 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.275 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:36.534 null7 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1626088 1626089 1626091 1626093 1626095 1626097 1626099 1626101 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.534 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.793 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.052 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.311 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.573 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.834 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.093 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.351 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.352 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.352 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.352 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.352 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.612 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.612 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.612 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
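The interleaved add/remove calls above and below come from eight concurrent add_remove workers, one per null bdev, each performing ten add/remove passes against nqn.2016-06.io.spdk:cnode1 (target/ns_hotplug_stress.sh lines 14-18, launched from lines 58-66). A sketch reconstructed from the trace; the for-loop forms and the backgrounding of the function are inferred from the (( i = 0 )) / (( i < 10 )) / (( ++i )) and pids+=($!) entries rather than shown verbatim:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as traced above

  add_remove() {                          # lines 14-18 in the trace
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do      # ten add/remove passes per worker
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
      done
  }

  nthreads=8                              # line 58
  pids=()
  for ((i = 0; i < nthreads; i++)); do    # lines 59-64
      $rpc_py bdev_null_create "null$i" 100 4096   # line 60: creates null0 .. null7
      add_remove $((i + 1)) "null$i" &    # worker i exercises namespace i+1, as in the trace
      pids+=($!)
  done
  wait "${pids[@]}"                       # line 66: PIDs 1626088 1626089 ... in this run

Because the eight workers run in parallel, their rpc.py add_ns and remove_ns calls interleave arbitrarily in the log, which is why the namespace IDs below appear in no fixed order.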
00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.871 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.130 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.389 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.647 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.647 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.648 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.648 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.648 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.648 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.648 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.648 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.906 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.172 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.172 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.172 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.172 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.172 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.172 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.173 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.173 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.437 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.695 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.954 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.212 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.471 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.729 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.988 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.988 rmmod nvme_tcp 00:07:42.246 rmmod nvme_fabrics 00:07:42.246 rmmod nvme_keyring 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1620975 ']' 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1620975 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1620975 ']' 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1620975 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1620975 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1620975' 00:07:42.246 killing process with pid 1620975 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1620975 00:07:42.246 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1620975 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.506 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.414 00:07:44.414 real 0m46.992s 00:07:44.414 user 3m34.703s 00:07:44.414 sys 0m16.565s 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 ************************************ 00:07:44.414 END TEST nvmf_ns_hotplug_stress 00:07:44.414 ************************************ 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 ************************************ 00:07:44.414 START TEST nvmf_delete_subsystem 00:07:44.414 ************************************ 00:07:44.414 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:44.674 * Looking for test storage... 
00:07:44.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.674 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.675 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
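At this point nvmftestinit is probing for usable NICs: gather_supported_nvmf_pci_devs builds arrays of PCI vendor/device IDs the harness recognises and then matches the installed adapters against them. An illustrative sketch of the idea, using only the IDs visible in the trace (this is not the literal common.sh code):

intel=0x8086 mellanox=0x15b3
e810=(0x1592 0x159b)                                              # Intel E810 variants
x722=(0x37d2)                                                     # Intel X722
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)     # Mellanox ConnectX family
pci_devs=("${e810[@]}")                                           # the e810 list is what matches on this host
# For each matching PCI function, its net devices (cvl_0_0 / cvl_0_1 below) are
# collected and later used as the target and initiator interfaces for the TCP transport.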
00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.576 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:46.577 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:46.577 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:46.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:46.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.577 06:03:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:07:46.577 00:07:46.577 --- 10.0.0.2 ping statistics --- 00:07:46.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.577 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:07:46.577 00:07:46.577 --- 10.0.0.1 ping statistics --- 00:07:46.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.577 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1628850 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1628850 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1628850 ']' 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.577 06:03:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.835 [2024-07-23 06:03:39.950962] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:07:46.835 [2024-07-23 06:03:39.951059] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.835 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.835 [2024-07-23 06:03:39.989934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:46.835 [2024-07-23 06:03:40.016963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.835 [2024-07-23 06:03:40.107842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.835 [2024-07-23 06:03:40.107910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.835 [2024-07-23 06:03:40.107924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.835 [2024-07-23 06:03:40.107935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.835 [2024-07-23 06:03:40.107945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.835 [2024-07-23 06:03:40.108075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.836 [2024-07-23 06:03:40.108080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 [2024-07-23 06:03:40.240556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 [2024-07-23 06:03:40.256769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 NULL1 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 Delay0 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1628882 00:07:47.094 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:47.095 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:47.095 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.095 [2024-07-23 06:03:40.331492] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
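The delete_subsystem test has now finished its setup: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 limited to 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB / 512-byte-block null bdev wrapped in a Delay0 delay bdev with ~1 s artificial latencies, and that Delay0 bdev attached as a namespace. spdk_nvme_perf is then started against it in the background and, two seconds later, the subsystem is deleted while I/O is still queued. Condensed as a sketch (every command below appears verbatim in the trace; only the gathering into one listing is editorial):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192                                            # @15
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # @16
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @17
$rpc_py bdev_null_create NULL1 1000 512                                                    # @18
$rpc_py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # @23
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0                            # @24
# @26-@30: drive 70/30 random read-write I/O at queue depth 128 for 5 s, then
# @32 (immediately below in the trace) deletes the subsystem underneath the running workload:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1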
00:07:48.996 06:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.996 06:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.996 06:03:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 [2024-07-23 06:03:42.436000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9b300 is same with the state(5) to be set 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with 
error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error (sct=0, sc=8) 00:07:49.254 starting I/O failed: -6 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Read completed with error (sct=0, sc=8) 00:07:49.254 Write completed with error 
(sct=0, sc=8)
00:07:49.254 Write completed with error (sct=0, sc=8)
00:07:49.254 starting I/O failed: -6
[further Read/Write completed with error (sct=0, sc=8) completions and "starting I/O failed: -6" messages omitted]
00:07:49.254 [2024-07-23 06:03:42.437093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a6400d330 is same with the state(5) to be set
[further Read/Write completed with error (sct=0, sc=8) completions omitted]
00:07:50.190 [2024-07-23 06:03:43.390327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db2b40 is same with the state(5) to be set
[further Read/Write completed with error (sct=0, sc=8) completions omitted]
00:07:50.190 [2024-07-23 06:03:43.439158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a6400d000 is same with the state(5) to be set
[further Read/Write completed with error (sct=0, sc=8) completions omitted]
00:07:50.190 [2024-07-23 06:03:43.440329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a6400d660 is same with the state(5) to be set
[further Read/Write completed with error (sct=0, sc=8) completions omitted]
00:07:50.190 [2024-07-23 06:03:43.440670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d94d40 is same with the state(5) to be set
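The failed completions in this stretch of the log are exactly what delete_subsystem.sh is meant to provoke: sct=0 with sc=8 is the NVMe generic "Command Aborted due to SQ Deletion" status, reported for spdk_nvme_perf I/O that was still queued when the target tore down nqn.2016-06.io.spdk:cnode1. A minimal bash sketch of the same scenario, assuming a running nvmf_tgt with that subsystem configured and the stock scripts/rpc.py client (paths, pids and addresses are illustrative only, not the test script itself):

#!/usr/bin/env bash
# Sketch: delete a subsystem while I/O is still in flight so that reads and
# writes complete with status (sct=0, sc=8).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Generate traffic against the subsystem in the background.
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -o 512 -w randrw -M 70 -t 3 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
perf_pid=$!

sleep 1   # let some I/O get queued at the driver

# Removing the subsystem aborts the commands still sitting in its queues.
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

wait "$perf_pid" || echo "spdk_nvme_perf exited with errors, as expected"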
00:07:50.190 Write completed with error (sct=0, sc=8) 00:07:50.190 Write completed with error (sct=0, sc=8) 00:07:50.190 Read completed with error (sct=0, sc=8) 00:07:50.190 Read completed with error (sct=0, sc=8) 00:07:50.190 Read completed with error (sct=0, sc=8) 00:07:50.190 Read completed with error (sct=0, sc=8) 00:07:50.190 Write completed with error (sct=0, sc=8) 00:07:50.190 Read completed with error (sct=0, sc=8) 00:07:50.190 [2024-07-23 06:03:43.440912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95100 is same with the state(5) to be set 00:07:50.190 Initializing NVMe Controllers 00:07:50.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.190 Controller IO queue size 128, less than required. 00:07:50.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:50.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:50.190 Initialization complete. Launching workers. 00:07:50.190 ======================================================== 00:07:50.190 Latency(us) 00:07:50.190 Device Information : IOPS MiB/s Average min max 00:07:50.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.73 0.08 893485.41 454.54 1043939.42 00:07:50.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.32 0.08 959926.80 324.95 2003934.61 00:07:50.190 ======================================================== 00:07:50.190 Total : 332.05 0.16 925563.99 324.95 2003934.61 00:07:50.190 00:07:50.190 [2024-07-23 06:03:43.441840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2b40 (9): Bad file descriptor 00:07:50.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:50.190 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.190 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:50.190 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1628882 00:07:50.190 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1628882 00:07:50.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1628882) - No such process 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1628882 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1628882 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.756 06:03:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1628882 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.756 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.757 [2024-07-23 06:03:43.962678] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1629406 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:50.757 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:50.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.757 [2024-07-23 06:03:44.026681] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:51.321 06:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:51.321 06:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:51.321 06:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:51.886 06:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:51.886 06:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:51.886 06:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:52.144 06:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:52.144 06:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:52.144 06:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:52.709 06:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:52.709 06:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:52.709 06:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:53.274 06:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:53.274 06:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:53.274 06:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:53.838 06:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:53.838 06:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:53.838 06:03:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.096 Initializing NVMe Controllers 00:07:54.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:54.096 Controller IO queue size 128, less than required. 00:07:54.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:54.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:54.096 Initialization complete. Launching workers. 
00:07:54.096 ======================================================== 00:07:54.096 Latency(us) 00:07:54.096 Device Information : IOPS MiB/s Average min max 00:07:54.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003645.97 1000234.48 1041750.80 00:07:54.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005021.86 1000316.85 1013678.82 00:07:54.096 ======================================================== 00:07:54.096 Total : 256.00 0.12 1004333.92 1000234.48 1041750.80 00:07:54.096 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1629406 00:07:54.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1629406) - No such process 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1629406 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.355 rmmod nvme_tcp 00:07:54.355 rmmod nvme_fabrics 00:07:54.355 rmmod nvme_keyring 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1628850 ']' 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1628850 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1628850 ']' 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1628850 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1628850 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1628850' 00:07:54.355 killing process with pid 1628850 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1628850 00:07:54.355 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1628850 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.613 06:03:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.148 00:07:57.148 real 0m12.139s 00:07:57.148 user 0m27.684s 00:07:57.148 sys 0m2.884s 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.148 ************************************ 00:07:57.148 END TEST nvmf_delete_subsystem 00:07:57.148 ************************************ 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.148 ************************************ 00:07:57.148 START TEST nvmf_host_management 00:07:57.148 ************************************ 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:57.148 * Looking for test storage... 
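The delete_subsystem run above relies on a small polling idiom that shows up repeatedly in the trace: kill -0 <pid> only probes whether the process still exists, and the surrounding (( delay++ > 20 )) / sleep 0.5 loop bounds how long the script will wait for spdk_nvme_perf to exit after its subsystem is removed. A rough stand-alone equivalent of that idiom (the function name is made up for illustration):

# Wait for a pid to exit, giving up after roughly 10 seconds (20 x 0.5s),
# mirroring the delay loops around pids 1628882 and 1629406 above.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1
        sleep 0.5
    done
    return 0
}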
00:07:57.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.148 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.149 06:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.065 
06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:59.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:59.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:59.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:59.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.065 06:03:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.065 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:59.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:07:59.066 00:07:59.066 --- 10.0.0.2 ping statistics --- 00:07:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.066 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:07:59.066 00:07:59.066 --- 10.0.0.1 ping statistics --- 00:07:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.066 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1631749 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1631749 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1631749 ']' 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.066 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.066 [2024-07-23 06:03:52.218546] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:59.066 [2024-07-23 06:03:52.218649] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.066 [2024-07-23 06:03:52.257267] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.066 [2024-07-23 06:03:52.283826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.066 [2024-07-23 06:03:52.373799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.066 [2024-07-23 06:03:52.373861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.066 [2024-07-23 06:03:52.373890] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.066 [2024-07-23 06:03:52.373901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.066 [2024-07-23 06:03:52.373911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
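At this point the target side is coming up: nvmf_tgt was launched inside the cvl_0_0_ns_spdk network namespace with core mask 0x1E (binary 11110, i.e. cores 1-4, matching the four reactor lines that follow), and the script then blocks until the RPC socket at /var/tmp/spdk.sock answers. A condensed sketch of that startup, with a simple retry loop standing in for the waitforlisten helper used by the test:

# Sketch: start the NVMe-oF target in the test namespace and wait for RPC.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Simplified stand-in for waitforlisten: retry until the RPC server replies.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    sleep 0.5
done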
00:07:59.066 [2024-07-23 06:03:52.374042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.066 [2024-07-23 06:03:52.374075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.066 [2024-07-23 06:03:52.374131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:59.066 [2024-07-23 06:03:52.374133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.324 [2024-07-23 06:03:52.531132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.324 Malloc0 00:07:59.324 [2024-07-23 06:03:52.592093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1631790 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1631790 /var/tmp/bdevperf.sock 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1631790 ']' 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.324 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:59.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:59.325 { 00:07:59.325 "params": { 00:07:59.325 "name": "Nvme$subsystem", 00:07:59.325 "trtype": "$TEST_TRANSPORT", 00:07:59.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.325 "adrfam": "ipv4", 00:07:59.325 "trsvcid": "$NVMF_PORT", 00:07:59.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.325 "hdgst": ${hdgst:-false}, 00:07:59.325 "ddgst": ${ddgst:-false} 00:07:59.325 }, 00:07:59.325 "method": "bdev_nvme_attach_controller" 00:07:59.325 } 00:07:59.325 EOF 00:07:59.325 )") 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:59.325 06:03:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:59.325 "params": { 00:07:59.325 "name": "Nvme0", 00:07:59.325 "trtype": "tcp", 00:07:59.325 "traddr": "10.0.0.2", 00:07:59.325 "adrfam": "ipv4", 00:07:59.325 "trsvcid": "4420", 00:07:59.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:59.325 "hdgst": false, 00:07:59.325 "ddgst": false 00:07:59.325 }, 00:07:59.325 "method": "bdev_nvme_attach_controller" 00:07:59.325 }' 00:07:59.583 [2024-07-23 06:03:52.673127] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
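The JSON fragment printed above is what gen_nvmf_target_json emits for controller 0: it tells bdevperf to attach a bdev_nvme controller over TCP to 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode0) before running the verify workload. A hand-written equivalent is sketched below; only the inner config entry appears in the log, so wrapping it in the usual bdev-subsystem envelope is an assumption here:

# Sketch: run bdevperf against the target with an equivalent hand-written config.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10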
00:07:59.583 [2024-07-23 06:03:52.673202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631790 ] 00:07:59.583 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.583 [2024-07-23 06:03:52.705210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.583 [2024-07-23 06:03:52.734397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.583 [2024-07-23 06:03:52.821247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.840 Running I/O for 10 seconds... 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 
']' 00:07:59.840 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=386 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 386 -ge 100 ']' 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.104 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.104 [2024-07-23 06:03:53.386702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386877] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.104 [2024-07-23 06:03:53.386889 - 06:03:53.387410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: (identical message repeated for tqpair=0x1313ae0; repetitions elided) 00:08:00.105 [2024-07-23 06:03:53.387422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.387537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313ae0 is same with the state(5) to be set 00:08:00.105 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.105 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:00.105 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.105 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.105 [2024-07-23 06:03:53.394661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.105 [2024-07-23 06:03:53.394712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.394730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.105 [2024-07-23 06:03:53.394744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.394758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.105 [2024-07-23 06:03:53.394771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.394784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:00.105 [2024-07-23 06:03:53.394797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.394816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250bb50 is same with the state(5) to be set 00:08:00.105 [2024-07-23 06:03:53.394896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.394929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.394952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.394968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.394990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.105 [2024-07-23 06:03:53.395453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.105 [2024-07-23 06:03:53.395468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.395967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.395989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.106 [2024-07-23 06:03:53.396411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:08:00.106 [2024-07-23 06:03:53.396443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.106 [2024-07-23 06:03:53.396460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:00.107 [2024-07-23 06:03:53.396764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.107 [2024-07-23 06:03:53.396894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.107 [2024-07-23 06:03:53.396991] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x293db00 was disconnected and freed. reset controller. 00:08:00.107 [2024-07-23 06:03:53.398149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:00.107 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.107 06:03:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:00.107 task offset: 52352 on job bdev=Nvme0n1 fails 00:08:00.107 00:08:00.107 Latency(us) 00:08:00.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:00.107 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:00.107 Verification LBA range: start 0x0 length 0x400 00:08:00.107 Nvme0n1 : 0.40 1024.39 64.02 160.30 0.00 52578.75 2694.26 45826.65 00:08:00.107 =================================================================================================================== 00:08:00.107 Total : 1024.39 64.02 160.30 0.00 52578.75 2694.26 45826.65 00:08:00.107 [2024-07-23 06:03:53.400178] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.107 [2024-07-23 06:03:53.400222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250bb50 (9): Bad file descriptor 00:08:00.107 [2024-07-23 06:03:53.408757] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
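(For readers tracing the pass/fail logic above: the read_io_count checks in the host_management trace come from a small polling helper in target/host_management.sh. The sketch below is a simplified reconstruction, not the verbatim script; rpc.py stands in for the suite's rpc_cmd wrapper, and the thresholds mirror the values visible in the trace.)

# Poll bdevperf over its RPC socket until the bdev reports at least 100
# completed reads, or give up after 10 attempts spaced 0.25 s apart.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i=10
    while (( i != 0 )); do
        local reads
        reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    return $ret
}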
00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1631790 00:08:01.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1631790) - No such process 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:01.485 { 00:08:01.485 "params": { 00:08:01.485 "name": "Nvme$subsystem", 00:08:01.485 "trtype": "$TEST_TRANSPORT", 00:08:01.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:01.485 "adrfam": "ipv4", 00:08:01.485 "trsvcid": "$NVMF_PORT", 00:08:01.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:01.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:01.485 "hdgst": ${hdgst:-false}, 00:08:01.485 "ddgst": ${ddgst:-false} 00:08:01.485 }, 00:08:01.485 "method": "bdev_nvme_attach_controller" 00:08:01.485 } 00:08:01.485 EOF 00:08:01.485 )") 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:01.485 06:03:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:01.485 "params": { 00:08:01.485 "name": "Nvme0", 00:08:01.485 "trtype": "tcp", 00:08:01.485 "traddr": "10.0.0.2", 00:08:01.485 "adrfam": "ipv4", 00:08:01.485 "trsvcid": "4420", 00:08:01.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:01.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:01.485 "hdgst": false, 00:08:01.485 "ddgst": false 00:08:01.485 }, 00:08:01.485 "method": "bdev_nvme_attach_controller" 00:08:01.485 }' 00:08:01.485 [2024-07-23 06:03:54.443499] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:01.485 [2024-07-23 06:03:54.443574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632071 ] 00:08:01.485 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.485 [2024-07-23 06:03:54.475401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
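(The JSON blob printed above is produced by gen_nvmf_target_json and handed to bdevperf without touching disk: the script uses process substitution, which is why the command line shows --json /dev/fd/62. A minimal sketch of that invocation, with the long workspace path shortened, is:)

# <(...) expands to a /dev/fd/NN path that bdevperf reads the generated config from.
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1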
00:08:01.485 [2024-07-23 06:03:54.504458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.485 [2024-07-23 06:03:54.593870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.485 Running I/O for 1 seconds... 00:08:02.864 00:08:02.864 Latency(us) 00:08:02.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.864 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:02.864 Verification LBA range: start 0x0 length 0x400 00:08:02.864 Nvme0n1 : 1.04 1166.56 72.91 0.00 0.00 54108.08 12621.75 46020.84 00:08:02.864 =================================================================================================================== 00:08:02.864 Total : 1166.56 72.91 0.00 0.00 54108.08 12621.75 46020.84 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.865 rmmod nvme_tcp 00:08:02.865 rmmod nvme_fabrics 00:08:02.865 rmmod nvme_keyring 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1631749 ']' 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1631749 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1631749 ']' 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1631749 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631749 00:08:02.865 06:03:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631749' 00:08:02.865 killing process with pid 1631749 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1631749 00:08:02.865 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1631749 00:08:03.124 [2024-07-23 06:03:56.359657] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.124 06:03:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:05.660 00:08:05.660 real 0m8.511s 00:08:05.660 user 0m18.928s 00:08:05.660 sys 0m2.613s 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 ************************************ 00:08:05.660 END TEST nvmf_host_management 00:08:05.660 ************************************ 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 ************************************ 00:08:05.660 START TEST nvmf_lvol 00:08:05.660 ************************************ 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:05.660 * Looking for test storage... 
00:08:05.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.660 06:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.575 06:04:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:07.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:07.575 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:07.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:07.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:07.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:08:07.576 00:08:07.576 --- 10.0.0.2 ping statistics --- 00:08:07.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.576 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:08:07.576 00:08:07.576 --- 10.0.0.1 ping statistics --- 00:08:07.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.576 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1634149 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1634149 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1634149 ']' 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.576 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.576 [2024-07-23 06:04:00.658827] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:07.576 [2024-07-23 06:04:00.658923] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.576 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.576 [2024-07-23 06:04:00.696865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:07.576 [2024-07-23 06:04:00.724579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.577 [2024-07-23 06:04:00.811941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.577 [2024-07-23 06:04:00.811993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.577 [2024-07-23 06:04:00.812022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.577 [2024-07-23 06:04:00.812034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.577 [2024-07-23 06:04:00.812044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.577 [2024-07-23 06:04:00.813108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.577 [2024-07-23 06:04:00.813284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.577 [2024-07-23 06:04:00.813287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.834 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.834 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:07.834 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.834 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.834 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.834 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.834 06:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:07.834 [2024-07-23 06:04:01.175789] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.092 06:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:08.349 06:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:08.349 06:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:08.607 06:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:08.607 06:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:08.864 06:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:09.123 06:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=22319693-63f6-4eda-9965-659b1e931830 00:08:09.123 06:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22319693-63f6-4eda-9965-659b1e931830 lvol 20 00:08:09.381 06:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@32 -- # lvol=a9405cd8-9e13-4ca7-8a37-87bc3cffa125 00:08:09.381 06:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.639 06:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9405cd8-9e13-4ca7-8a37-87bc3cffa125 00:08:09.639 06:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.897 [2024-07-23 06:04:03.212758] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.897 06:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.154 06:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1634574 00:08:10.154 06:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:10.154 06:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:10.411 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.345 06:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a9405cd8-9e13-4ca7-8a37-87bc3cffa125 MY_SNAPSHOT 00:08:11.603 06:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9120fb4d-614f-417a-82e5-76d870c83b5e 00:08:11.603 06:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a9405cd8-9e13-4ca7-8a37-87bc3cffa125 30 00:08:11.861 06:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9120fb4d-614f-417a-82e5-76d870c83b5e MY_CLONE 00:08:12.119 06:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d1d5e2f5-446e-4b9c-8c32-d6ac12b486ab 00:08:12.119 06:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d1d5e2f5-446e-4b9c-8c32-d6ac12b486ab 00:08:12.684 06:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1634574 00:08:20.790 Initializing NVMe Controllers 00:08:20.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:20.790 Controller IO queue size 128, less than required. 00:08:20.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:20.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:20.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:20.790 Initialization complete. Launching workers. 
00:08:20.790 ======================================================== 00:08:20.790 Latency(us) 00:08:20.790 Device Information : IOPS MiB/s Average min max 00:08:20.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9516.62 37.17 13454.83 2179.13 134123.43 00:08:20.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10718.84 41.87 11945.05 2189.93 58475.86 00:08:20.790 ======================================================== 00:08:20.790 Total : 20235.46 79.04 12655.09 2179.13 134123.43 00:08:20.790 00:08:20.790 06:04:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.790 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9405cd8-9e13-4ca7-8a37-87bc3cffa125 00:08:21.047 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22319693-63f6-4eda-9965-659b1e931830 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.306 rmmod nvme_tcp 00:08:21.306 rmmod nvme_fabrics 00:08:21.306 rmmod nvme_keyring 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1634149 ']' 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1634149 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1634149 ']' 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1634149 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1634149 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.306 06:04:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1634149' 00:08:21.306 killing process with pid 1634149 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1634149 00:08:21.306 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1634149 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.573 06:04:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.106 00:08:24.106 real 0m18.458s 00:08:24.106 user 1m2.553s 00:08:24.106 sys 0m5.888s 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 ************************************ 00:08:24.106 END TEST nvmf_lvol 00:08:24.106 ************************************ 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 ************************************ 00:08:24.106 START TEST nvmf_lvs_grow 00:08:24.106 ************************************ 00:08:24.106 06:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:24.106 * Looking for test storage... 
00:08:24.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.106 06:04:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:24.106 06:04:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.106 06:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:26.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:26.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.007 
06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:26.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:26.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.007 06:04:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.007 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:26.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:26.008 00:08:26.008 --- 10.0.0.2 ping statistics --- 00:08:26.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.008 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:08:26.008 00:08:26.008 --- 10.0.0.1 ping statistics --- 00:08:26.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.008 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1637838 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1637838 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1637838 ']' 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.008 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.008 [2024-07-23 06:04:19.253636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:26.008 [2024-07-23 06:04:19.253717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.008 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.008 [2024-07-23 06:04:19.290340] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.008 [2024-07-23 06:04:19.322023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.266 [2024-07-23 06:04:19.414863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.266 [2024-07-23 06:04:19.414931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.266 [2024-07-23 06:04:19.414948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.266 [2024-07-23 06:04:19.414961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.266 [2024-07-23 06:04:19.414983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.266 [2024-07-23 06:04:19.415012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.266 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.266 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:26.266 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.266 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.266 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.266 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.266 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.524 [2024-07-23 06:04:19.777299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.524 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:26.524 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.524 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.524 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.524 ************************************ 00:08:26.525 START TEST lvs_grow_clean 00:08:26.525 ************************************ 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.525 06:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.783 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:26.783 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.041 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:27.041 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:27.041 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.298 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.298 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.298 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b11d7acf-5b81-4402-9f87-d2d72213f08c lvol 150 00:08:27.556 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4c82ad95-4080-4523-97f5-e50d1c2f9315 00:08:27.556 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.556 06:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.814 [2024-07-23 06:04:21.083785] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.814 [2024-07-23 06:04:21.083876] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 
00:08:27.814 true 00:08:27.814 06:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:27.814 06:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:28.072 06:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:28.072 06:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.330 06:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4c82ad95-4080-4523-97f5-e50d1c2f9315 00:08:28.588 06:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.846 [2024-07-23 06:04:22.062824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.846 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1638244 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1638244 /var/tmp/bdevperf.sock 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1638244 ']' 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.104 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:29.104 [2024-07-23 06:04:22.373582] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:29.105 [2024-07-23 06:04:22.373689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638244 ] 00:08:29.105 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.105 [2024-07-23 06:04:22.406464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:29.105 [2024-07-23 06:04:22.435196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.363 [2024-07-23 06:04:22.526031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.363 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.363 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:29.363 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.621 Nvme0n1 00:08:29.878 06:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.878 [ 00:08:29.878 { 00:08:29.878 "name": "Nvme0n1", 00:08:29.878 "aliases": [ 00:08:29.878 "4c82ad95-4080-4523-97f5-e50d1c2f9315" 00:08:29.878 ], 00:08:29.878 "product_name": "NVMe disk", 00:08:29.878 "block_size": 4096, 00:08:29.878 "num_blocks": 38912, 00:08:29.878 "uuid": "4c82ad95-4080-4523-97f5-e50d1c2f9315", 00:08:29.878 "assigned_rate_limits": { 00:08:29.878 "rw_ios_per_sec": 0, 00:08:29.878 "rw_mbytes_per_sec": 0, 00:08:29.879 "r_mbytes_per_sec": 0, 00:08:29.879 "w_mbytes_per_sec": 0 00:08:29.879 }, 00:08:29.879 "claimed": false, 00:08:29.879 "zoned": false, 00:08:29.879 "supported_io_types": { 00:08:29.879 "read": true, 00:08:29.879 "write": true, 00:08:29.879 "unmap": true, 00:08:29.879 "flush": true, 00:08:29.879 "reset": true, 00:08:29.879 "nvme_admin": true, 00:08:29.879 "nvme_io": true, 00:08:29.879 "nvme_io_md": false, 00:08:29.879 "write_zeroes": true, 00:08:29.879 "zcopy": false, 00:08:29.879 "get_zone_info": false, 00:08:29.879 "zone_management": false, 00:08:29.879 "zone_append": false, 00:08:29.879 "compare": true, 00:08:29.879 "compare_and_write": true, 00:08:29.879 "abort": true, 00:08:29.879 "seek_hole": false, 00:08:29.879 "seek_data": false, 00:08:29.879 "copy": true, 00:08:29.879 "nvme_iov_md": false 00:08:29.879 }, 00:08:29.879 "memory_domains": [ 00:08:29.879 { 00:08:29.879 "dma_device_id": "system", 00:08:29.879 "dma_device_type": 1 00:08:29.879 } 00:08:29.879 ], 00:08:29.879 "driver_specific": { 00:08:29.879 "nvme": [ 00:08:29.879 { 00:08:29.879 "trid": { 00:08:29.879 "trtype": "TCP", 00:08:29.879 "adrfam": "IPv4", 00:08:29.879 "traddr": "10.0.0.2", 00:08:29.879 "trsvcid": "4420", 00:08:29.879 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.879 }, 00:08:29.879 "ctrlr_data": { 00:08:29.879 "cntlid": 1, 00:08:29.879 "vendor_id": "0x8086", 00:08:29.879 "model_number": "SPDK bdev Controller", 00:08:29.879 "serial_number": "SPDK0", 00:08:29.879 "firmware_revision": "24.09", 00:08:29.879 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:08:29.879 "oacs": { 00:08:29.879 "security": 0, 00:08:29.879 "format": 0, 00:08:29.879 "firmware": 0, 00:08:29.879 "ns_manage": 0 00:08:29.879 }, 00:08:29.879 "multi_ctrlr": true, 00:08:29.879 "ana_reporting": false 00:08:29.879 }, 00:08:29.879 "vs": { 00:08:29.879 "nvme_version": "1.3" 00:08:29.879 }, 00:08:29.879 "ns_data": { 00:08:29.879 "id": 1, 00:08:29.879 "can_share": true 00:08:29.879 } 00:08:29.879 } 00:08:29.879 ], 00:08:29.879 "mp_policy": "active_passive" 00:08:29.879 } 00:08:29.879 } 00:08:29.879 ] 00:08:30.154 06:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1638293 00:08:30.154 06:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:30.154 06:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.154 Running I/O for 10 seconds... 00:08:31.088 Latency(us) 00:08:31.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.088 Nvme0n1 : 1.00 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:08:31.088 =================================================================================================================== 00:08:31.088 Total : 14023.00 54.78 0.00 0.00 0.00 0.00 0.00 00:08:31.088 00:08:32.021 06:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:32.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.021 Nvme0n1 : 2.00 14244.00 55.64 0.00 0.00 0.00 0.00 0.00 00:08:32.021 =================================================================================================================== 00:08:32.021 Total : 14244.00 55.64 0.00 0.00 0.00 0.00 0.00 00:08:32.021 00:08:32.279 true 00:08:32.279 06:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:32.279 06:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:32.537 06:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:32.537 06:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:32.537 06:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1638293 00:08:33.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.103 Nvme0n1 : 3.00 14365.67 56.12 0.00 0.00 0.00 0.00 0.00 00:08:33.103 =================================================================================================================== 00:08:33.103 Total : 14365.67 56.12 0.00 0.00 0.00 0.00 0.00 00:08:33.103 00:08:34.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.037 Nvme0n1 : 4.00 14465.75 56.51 0.00 0.00 0.00 0.00 0.00 00:08:34.037 
=================================================================================================================== 00:08:34.037 Total : 14465.75 56.51 0.00 0.00 0.00 0.00 0.00 00:08:34.037 00:08:35.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.413 Nvme0n1 : 5.00 14503.80 56.66 0.00 0.00 0.00 0.00 0.00 00:08:35.413 =================================================================================================================== 00:08:35.413 Total : 14503.80 56.66 0.00 0.00 0.00 0.00 0.00 00:08:35.413 00:08:36.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.347 Nvme0n1 : 6.00 14550.50 56.84 0.00 0.00 0.00 0.00 0.00 00:08:36.347 =================================================================================================================== 00:08:36.347 Total : 14550.50 56.84 0.00 0.00 0.00 0.00 0.00 00:08:36.347 00:08:37.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.281 Nvme0n1 : 7.00 14574.71 56.93 0.00 0.00 0.00 0.00 0.00 00:08:37.281 =================================================================================================================== 00:08:37.281 Total : 14574.71 56.93 0.00 0.00 0.00 0.00 0.00 00:08:37.281 00:08:38.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.214 Nvme0n1 : 8.00 14600.62 57.03 0.00 0.00 0.00 0.00 0.00 00:08:38.214 =================================================================================================================== 00:08:38.214 Total : 14600.62 57.03 0.00 0.00 0.00 0.00 0.00 00:08:38.214 00:08:39.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.149 Nvme0n1 : 9.00 14621.00 57.11 0.00 0.00 0.00 0.00 0.00 00:08:39.149 =================================================================================================================== 00:08:39.149 Total : 14621.00 57.11 0.00 0.00 0.00 0.00 0.00 00:08:39.149 00:08:40.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.083 Nvme0n1 : 10.00 14643.70 57.20 0.00 0.00 0.00 0.00 0.00 00:08:40.083 =================================================================================================================== 00:08:40.083 Total : 14643.70 57.20 0.00 0.00 0.00 0.00 0.00 00:08:40.083 00:08:40.083 00:08:40.083 Latency(us) 00:08:40.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.083 Nvme0n1 : 10.01 14646.12 57.21 0.00 0.00 8733.60 5194.33 18835.53 00:08:40.083 =================================================================================================================== 00:08:40.083 Total : 14646.12 57.21 0.00 0.00 8733.60 5194.33 18835.53 00:08:40.083 0 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1638244 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1638244 ']' 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1638244 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.083 06:04:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1638244 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1638244' 00:08:40.083 killing process with pid 1638244 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1638244 00:08:40.083 Received shutdown signal, test time was about 10.000000 seconds 00:08:40.083 00:08:40.083 Latency(us) 00:08:40.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.083 =================================================================================================================== 00:08:40.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:40.083 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1638244 00:08:40.341 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.598 06:04:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.855 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:40.855 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:41.113 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:41.113 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:41.113 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.371 [2024-07-23 06:04:34.632334] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.371 06:04:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.371 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:41.629 request: 00:08:41.629 { 00:08:41.629 "uuid": "b11d7acf-5b81-4402-9f87-d2d72213f08c", 00:08:41.629 "method": "bdev_lvol_get_lvstores", 00:08:41.629 "req_id": 1 00:08:41.629 } 00:08:41.629 Got JSON-RPC error response 00:08:41.629 response: 00:08:41.629 { 00:08:41.629 "code": -19, 00:08:41.629 "message": "No such device" 00:08:41.629 } 00:08:41.629 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:41.629 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:41.629 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:41.629 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:41.629 06:04:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.887 aio_bdev 00:08:41.887 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4c82ad95-4080-4523-97f5-e50d1c2f9315 00:08:41.887 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=4c82ad95-4080-4523-97f5-e50d1c2f9315 00:08:41.887 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:41.887 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:41.887 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:41.887 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:41.887 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.144 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4c82ad95-4080-4523-97f5-e50d1c2f9315 -t 2000 00:08:42.402 [ 00:08:42.402 { 00:08:42.402 "name": "4c82ad95-4080-4523-97f5-e50d1c2f9315", 00:08:42.402 "aliases": [ 00:08:42.402 "lvs/lvol" 00:08:42.402 ], 00:08:42.402 "product_name": "Logical Volume", 00:08:42.402 "block_size": 4096, 00:08:42.402 "num_blocks": 38912, 00:08:42.402 "uuid": "4c82ad95-4080-4523-97f5-e50d1c2f9315", 00:08:42.402 "assigned_rate_limits": { 00:08:42.402 "rw_ios_per_sec": 0, 00:08:42.402 "rw_mbytes_per_sec": 0, 00:08:42.402 "r_mbytes_per_sec": 0, 00:08:42.402 "w_mbytes_per_sec": 0 00:08:42.402 }, 00:08:42.402 "claimed": false, 00:08:42.402 "zoned": false, 00:08:42.402 "supported_io_types": { 00:08:42.403 "read": true, 00:08:42.403 "write": true, 00:08:42.403 "unmap": true, 00:08:42.403 "flush": false, 00:08:42.403 "reset": true, 00:08:42.403 "nvme_admin": false, 00:08:42.403 "nvme_io": false, 00:08:42.403 "nvme_io_md": false, 00:08:42.403 "write_zeroes": true, 00:08:42.403 "zcopy": false, 00:08:42.403 "get_zone_info": false, 00:08:42.403 "zone_management": false, 00:08:42.403 "zone_append": false, 00:08:42.403 "compare": false, 00:08:42.403 "compare_and_write": false, 00:08:42.403 "abort": false, 00:08:42.403 "seek_hole": true, 00:08:42.403 "seek_data": true, 00:08:42.403 "copy": false, 00:08:42.403 "nvme_iov_md": false 00:08:42.403 }, 00:08:42.403 "driver_specific": { 00:08:42.403 "lvol": { 00:08:42.403 "lvol_store_uuid": "b11d7acf-5b81-4402-9f87-d2d72213f08c", 00:08:42.403 "base_bdev": "aio_bdev", 00:08:42.403 "thin_provision": false, 00:08:42.403 "num_allocated_clusters": 38, 00:08:42.403 "snapshot": false, 00:08:42.403 "clone": false, 00:08:42.403 "esnap_clone": false 00:08:42.403 } 00:08:42.403 } 00:08:42.403 } 00:08:42.403 ] 00:08:42.403 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:42.403 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:42.403 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.661 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.661 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:42.661 06:04:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:42.919 06:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:42.919 06:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4c82ad95-4080-4523-97f5-e50d1c2f9315 00:08:43.177 06:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b11d7acf-5b81-4402-9f87-d2d72213f08c 00:08:43.435 06:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.695 06:04:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.695 00:08:43.695 real 0m17.182s 00:08:43.695 user 0m16.371s 00:08:43.695 sys 0m1.983s 00:08:43.695 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.695 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:43.695 ************************************ 00:08:43.695 END TEST lvs_grow_clean 00:08:43.695 ************************************ 00:08:43.695 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:43.695 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:43.695 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.695 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.695 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.953 ************************************ 00:08:43.953 START TEST lvs_grow_dirty 00:08:43.953 ************************************ 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.953 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.211 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:44.211 06:04:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:44.469 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0fdeec84-5412-4e61-82ed-b21062328712 00:08:44.469 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:08:44.469 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.727 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.727 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.727 06:04:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0fdeec84-5412-4e61-82ed-b21062328712 lvol 150 00:08:44.985 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=183e045c-8c92-43b6-80de-e66c20eae51b 00:08:44.985 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.985 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:45.243 [2024-07-23 06:04:38.385969] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:45.243 [2024-07-23 06:04:38.386068] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:45.243 true 00:08:45.243 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:08:45.243 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:45.501 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:45.501 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.758 06:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 183e045c-8c92-43b6-80de-e66c20eae51b 00:08:46.016 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:46.274 [2024-07-23 
06:04:39.485263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.274 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.532 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1640340 00:08:46.532 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:46.532 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:46.532 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1640340 /var/tmp/bdevperf.sock 00:08:46.532 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1640340 ']' 00:08:46.532 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:46.533 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.533 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:46.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:46.533 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.533 06:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.533 [2024-07-23 06:04:39.793729] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:46.533 [2024-07-23 06:04:39.793816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640340 ] 00:08:46.533 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.533 [2024-07-23 06:04:39.827130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
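The dirty-mode variant follows the same export-and-attach path, but the lvstore sits on a file-backed AIO bdev that has already been enlarged underneath it: as the calls above show, the backing file is truncated from 200M to 400M and rescanned, so the lvstore keeps reporting 49 data clusters until bdev_lvol_grow_lvstore runs under load. In outline (paths abbreviated to the aio_bdev file under test/nvmf/target; UUIDs as printed in this run):

  truncate -s 400M test/nvmf/target/aio_bdev          # enlarge the backing file (51200 -> 102400 blocks)
  rpc.py bdev_aio_rescan aio_bdev                      # bdev picks up the new size; lvstore still shows 49 clusters
  rpc.py bdev_lvol_grow_lvstore -u 0fdeec84-5412-4e61-82ed-b21062328712                                      # issued while bdevperf drives randwrite I/O
  rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 | jq -r '.[0].total_data_clusters'   # expect 99 after the grow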
00:08:46.533 [2024-07-23 06:04:39.857822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.790 [2024-07-23 06:04:39.949252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.790 06:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.790 06:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:46.790 06:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:47.356 Nvme0n1 00:08:47.356 06:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:47.616 [ 00:08:47.616 { 00:08:47.616 "name": "Nvme0n1", 00:08:47.616 "aliases": [ 00:08:47.616 "183e045c-8c92-43b6-80de-e66c20eae51b" 00:08:47.616 ], 00:08:47.616 "product_name": "NVMe disk", 00:08:47.616 "block_size": 4096, 00:08:47.616 "num_blocks": 38912, 00:08:47.616 "uuid": "183e045c-8c92-43b6-80de-e66c20eae51b", 00:08:47.616 "assigned_rate_limits": { 00:08:47.616 "rw_ios_per_sec": 0, 00:08:47.616 "rw_mbytes_per_sec": 0, 00:08:47.616 "r_mbytes_per_sec": 0, 00:08:47.616 "w_mbytes_per_sec": 0 00:08:47.616 }, 00:08:47.616 "claimed": false, 00:08:47.616 "zoned": false, 00:08:47.616 "supported_io_types": { 00:08:47.616 "read": true, 00:08:47.616 "write": true, 00:08:47.616 "unmap": true, 00:08:47.616 "flush": true, 00:08:47.616 "reset": true, 00:08:47.616 "nvme_admin": true, 00:08:47.616 "nvme_io": true, 00:08:47.616 "nvme_io_md": false, 00:08:47.616 "write_zeroes": true, 00:08:47.616 "zcopy": false, 00:08:47.616 "get_zone_info": false, 00:08:47.616 "zone_management": false, 00:08:47.616 "zone_append": false, 00:08:47.616 "compare": true, 00:08:47.616 "compare_and_write": true, 00:08:47.616 "abort": true, 00:08:47.616 "seek_hole": false, 00:08:47.616 "seek_data": false, 00:08:47.616 "copy": true, 00:08:47.616 "nvme_iov_md": false 00:08:47.616 }, 00:08:47.616 "memory_domains": [ 00:08:47.616 { 00:08:47.616 "dma_device_id": "system", 00:08:47.616 "dma_device_type": 1 00:08:47.616 } 00:08:47.616 ], 00:08:47.616 "driver_specific": { 00:08:47.616 "nvme": [ 00:08:47.616 { 00:08:47.616 "trid": { 00:08:47.616 "trtype": "TCP", 00:08:47.616 "adrfam": "IPv4", 00:08:47.616 "traddr": "10.0.0.2", 00:08:47.616 "trsvcid": "4420", 00:08:47.616 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:47.616 }, 00:08:47.616 "ctrlr_data": { 00:08:47.616 "cntlid": 1, 00:08:47.616 "vendor_id": "0x8086", 00:08:47.616 "model_number": "SPDK bdev Controller", 00:08:47.616 "serial_number": "SPDK0", 00:08:47.616 "firmware_revision": "24.09", 00:08:47.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.616 "oacs": { 00:08:47.616 "security": 0, 00:08:47.616 "format": 0, 00:08:47.616 "firmware": 0, 00:08:47.616 "ns_manage": 0 00:08:47.616 }, 00:08:47.616 "multi_ctrlr": true, 00:08:47.616 "ana_reporting": false 00:08:47.616 }, 00:08:47.616 "vs": { 00:08:47.616 "nvme_version": "1.3" 00:08:47.616 }, 00:08:47.616 "ns_data": { 00:08:47.616 "id": 1, 00:08:47.616 "can_share": true 00:08:47.616 } 00:08:47.616 } 00:08:47.616 ], 00:08:47.616 "mp_policy": "active_passive" 00:08:47.616 } 00:08:47.616 } 00:08:47.616 ] 00:08:47.616 
06:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1640474 00:08:47.616 06:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:47.616 06:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:47.616 Running I/O for 10 seconds... 00:08:48.557 Latency(us) 00:08:48.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.557 Nvme0n1 : 1.00 14088.00 55.03 0.00 0.00 0.00 0.00 0.00 00:08:48.557 =================================================================================================================== 00:08:48.557 Total : 14088.00 55.03 0.00 0.00 0.00 0.00 0.00 00:08:48.557 00:08:49.489 06:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0fdeec84-5412-4e61-82ed-b21062328712 00:08:49.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.747 Nvme0n1 : 2.00 14212.00 55.52 0.00 0.00 0.00 0.00 0.00 00:08:49.747 =================================================================================================================== 00:08:49.747 Total : 14212.00 55.52 0.00 0.00 0.00 0.00 0.00 00:08:49.747 00:08:49.747 true 00:08:49.747 06:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:08:49.747 06:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:50.004 06:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:50.004 06:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:50.004 06:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1640474 00:08:50.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.570 Nvme0n1 : 3.00 14316.67 55.92 0.00 0.00 0.00 0.00 0.00 00:08:50.570 =================================================================================================================== 00:08:50.570 Total : 14316.67 55.92 0.00 0.00 0.00 0.00 0.00 00:08:50.570 00:08:51.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.945 Nvme0n1 : 4.00 14385.50 56.19 0.00 0.00 0.00 0.00 0.00 00:08:51.945 =================================================================================================================== 00:08:51.945 Total : 14385.50 56.19 0.00 0.00 0.00 0.00 0.00 00:08:51.945 00:08:52.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.879 Nvme0n1 : 5.00 14439.60 56.40 0.00 0.00 0.00 0.00 0.00 00:08:52.879 =================================================================================================================== 00:08:52.879 Total : 14439.60 56.40 0.00 0.00 0.00 0.00 0.00 00:08:52.879 00:08:53.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:53.812 Nvme0n1 : 6.00 14489.00 56.60 0.00 0.00 0.00 0.00 0.00 00:08:53.812 =================================================================================================================== 00:08:53.812 Total : 14489.00 56.60 0.00 0.00 0.00 0.00 0.00 00:08:53.812 00:08:54.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.745 Nvme0n1 : 7.00 14528.71 56.75 0.00 0.00 0.00 0.00 0.00 00:08:54.745 =================================================================================================================== 00:08:54.745 Total : 14528.71 56.75 0.00 0.00 0.00 0.00 0.00 00:08:54.745 00:08:55.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.678 Nvme0n1 : 8.00 14560.75 56.88 0.00 0.00 0.00 0.00 0.00 00:08:55.678 =================================================================================================================== 00:08:55.678 Total : 14560.75 56.88 0.00 0.00 0.00 0.00 0.00 00:08:55.678 00:08:56.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.612 Nvme0n1 : 9.00 14585.44 56.97 0.00 0.00 0.00 0.00 0.00 00:08:56.612 =================================================================================================================== 00:08:56.612 Total : 14585.44 56.97 0.00 0.00 0.00 0.00 0.00 00:08:56.612 00:08:57.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.987 Nvme0n1 : 10.00 14624.40 57.13 0.00 0.00 0.00 0.00 0.00 00:08:57.987 =================================================================================================================== 00:08:57.987 Total : 14624.40 57.13 0.00 0.00 0.00 0.00 0.00 00:08:57.987 00:08:57.987 00:08:57.987 Latency(us) 00:08:57.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.987 Nvme0n1 : 10.01 14623.89 57.12 0.00 0.00 8746.35 4150.61 15631.55 00:08:57.987 =================================================================================================================== 00:08:57.987 Total : 14623.89 57.12 0.00 0.00 8746.35 4150.61 15631.55 00:08:57.987 0 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1640340 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1640340 ']' 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1640340 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1640340 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:57.987 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:57.988 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1640340' 00:08:57.988 killing process with pid 1640340 00:08:57.988 06:04:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1640340 00:08:57.988 Received shutdown signal, test time was about 10.000000 seconds 00:08:57.988 00:08:57.988 Latency(us) 00:08:57.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.988 =================================================================================================================== 00:08:57.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:57.988 06:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1640340 00:08:57.988 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.246 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.504 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:08:58.504 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:58.762 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:58.762 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:58.762 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1637838 00:08:58.762 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1637838 00:08:58.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1637838 Killed "${NVMF_APP[@]}" "$@" 00:08:58.762 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:58.763 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:58.763 06:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1641812 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1641812 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1641812 ']' 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.763 06:04:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.763 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.763 [2024-07-23 06:04:52.052924] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:58.763 [2024-07-23 06:04:52.053021] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.763 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.763 [2024-07-23 06:04:52.092849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:59.021 [2024-07-23 06:04:52.120529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.021 [2024-07-23 06:04:52.207370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.021 [2024-07-23 06:04:52.207435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.021 [2024-07-23 06:04:52.207463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.021 [2024-07-23 06:04:52.207474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.021 [2024-07-23 06:04:52.207484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
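At this point the original target application (pid 1637838) has been killed with SIGKILL while the grown lvstore was still open, so no clean shutdown was recorded, and a fresh nvmf_tgt (pid 1641812) has been started in its place. The lines that follow re-create the AIO bdev over the same backing file, which makes the blobstore load path perform recovery and re-expose the lvol. The verification amounts to roughly (same path and UUID abbreviations as above):

  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096          # reload the dirty lvstore; triggers blobstore recovery
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b 183e045c-8c92-43b6-80de-e66c20eae51b -t 2000   # the lvol reappears under the lvs/lvol alias
  rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 | jq -r '.[0].free_clusters'         # expect 61, as before the kill
  rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 | jq -r '.[0].total_data_clusters'   # expect 99, the grown size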
00:08:59.021 [2024-07-23 06:04:52.207509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.021 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.021 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:59.021 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.021 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.021 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.021 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.021 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.279 [2024-07-23 06:04:52.594876] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:59.279 [2024-07-23 06:04:52.595013] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:59.279 [2024-07-23 06:04:52.595070] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:59.279 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:59.280 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 183e045c-8c92-43b6-80de-e66c20eae51b 00:08:59.280 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=183e045c-8c92-43b6-80de-e66c20eae51b 00:08:59.280 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:59.280 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:59.280 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:59.280 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:59.280 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.845 06:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 183e045c-8c92-43b6-80de-e66c20eae51b -t 2000 00:08:59.845 [ 00:08:59.845 { 00:08:59.845 "name": "183e045c-8c92-43b6-80de-e66c20eae51b", 00:08:59.845 "aliases": [ 00:08:59.845 "lvs/lvol" 00:08:59.845 ], 00:08:59.845 "product_name": "Logical Volume", 00:08:59.845 "block_size": 4096, 00:08:59.845 "num_blocks": 38912, 00:08:59.845 "uuid": "183e045c-8c92-43b6-80de-e66c20eae51b", 00:08:59.845 "assigned_rate_limits": { 00:08:59.845 "rw_ios_per_sec": 0, 00:08:59.845 "rw_mbytes_per_sec": 0, 00:08:59.845 "r_mbytes_per_sec": 0, 00:08:59.845 "w_mbytes_per_sec": 0 00:08:59.845 }, 00:08:59.845 "claimed": false, 00:08:59.845 "zoned": false, 
00:08:59.845 "supported_io_types": { 00:08:59.845 "read": true, 00:08:59.845 "write": true, 00:08:59.845 "unmap": true, 00:08:59.845 "flush": false, 00:08:59.845 "reset": true, 00:08:59.845 "nvme_admin": false, 00:08:59.845 "nvme_io": false, 00:08:59.845 "nvme_io_md": false, 00:08:59.845 "write_zeroes": true, 00:08:59.845 "zcopy": false, 00:08:59.845 "get_zone_info": false, 00:08:59.845 "zone_management": false, 00:08:59.845 "zone_append": false, 00:08:59.845 "compare": false, 00:08:59.845 "compare_and_write": false, 00:08:59.845 "abort": false, 00:08:59.845 "seek_hole": true, 00:08:59.845 "seek_data": true, 00:08:59.845 "copy": false, 00:08:59.845 "nvme_iov_md": false 00:08:59.845 }, 00:08:59.845 "driver_specific": { 00:08:59.845 "lvol": { 00:08:59.845 "lvol_store_uuid": "0fdeec84-5412-4e61-82ed-b21062328712", 00:08:59.845 "base_bdev": "aio_bdev", 00:08:59.845 "thin_provision": false, 00:08:59.845 "num_allocated_clusters": 38, 00:08:59.845 "snapshot": false, 00:08:59.845 "clone": false, 00:08:59.845 "esnap_clone": false 00:08:59.845 } 00:08:59.845 } 00:08:59.845 } 00:08:59.845 ] 00:08:59.845 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:59.845 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:09:00.103 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:00.103 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:00.103 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:09:00.103 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:00.669 [2024-07-23 06:04:53.944421] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:00.669 06:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:09:00.928 request: 00:09:00.928 { 00:09:00.928 "uuid": "0fdeec84-5412-4e61-82ed-b21062328712", 00:09:00.928 "method": "bdev_lvol_get_lvstores", 00:09:00.928 "req_id": 1 00:09:00.928 } 00:09:00.928 Got JSON-RPC error response 00:09:00.928 response: 00:09:00.928 { 00:09:00.928 "code": -19, 00:09:00.928 "message": "No such device" 00:09:00.928 } 00:09:00.928 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:00.928 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.928 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.928 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.928 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.188 aio_bdev 00:09:01.188 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 183e045c-8c92-43b6-80de-e66c20eae51b 00:09:01.188 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=183e045c-8c92-43b6-80de-e66c20eae51b 00:09:01.188 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:01.188 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:01.188 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:01.188 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:01.188 06:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:01.448 06:04:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 183e045c-8c92-43b6-80de-e66c20eae51b -t 2000 00:09:01.707 [ 00:09:01.707 { 00:09:01.707 "name": "183e045c-8c92-43b6-80de-e66c20eae51b", 00:09:01.707 "aliases": [ 00:09:01.707 "lvs/lvol" 00:09:01.707 ], 00:09:01.707 "product_name": "Logical Volume", 00:09:01.707 "block_size": 4096, 00:09:01.707 "num_blocks": 38912, 00:09:01.707 "uuid": "183e045c-8c92-43b6-80de-e66c20eae51b", 00:09:01.707 "assigned_rate_limits": { 00:09:01.707 "rw_ios_per_sec": 0, 00:09:01.707 "rw_mbytes_per_sec": 0, 00:09:01.707 "r_mbytes_per_sec": 0, 00:09:01.707 "w_mbytes_per_sec": 0 00:09:01.707 }, 00:09:01.707 "claimed": false, 00:09:01.707 "zoned": false, 00:09:01.707 "supported_io_types": { 00:09:01.707 "read": true, 00:09:01.707 "write": true, 00:09:01.707 "unmap": true, 00:09:01.707 "flush": false, 00:09:01.707 "reset": true, 00:09:01.707 "nvme_admin": false, 00:09:01.707 "nvme_io": false, 00:09:01.707 "nvme_io_md": false, 00:09:01.707 "write_zeroes": true, 00:09:01.707 "zcopy": false, 00:09:01.707 "get_zone_info": false, 00:09:01.707 "zone_management": false, 00:09:01.707 "zone_append": false, 00:09:01.707 "compare": false, 00:09:01.707 "compare_and_write": false, 00:09:01.707 "abort": false, 00:09:01.707 "seek_hole": true, 00:09:01.707 "seek_data": true, 00:09:01.707 "copy": false, 00:09:01.707 "nvme_iov_md": false 00:09:01.707 }, 00:09:01.707 "driver_specific": { 00:09:01.707 "lvol": { 00:09:01.707 "lvol_store_uuid": "0fdeec84-5412-4e61-82ed-b21062328712", 00:09:01.707 "base_bdev": "aio_bdev", 00:09:01.707 "thin_provision": false, 00:09:01.707 "num_allocated_clusters": 38, 00:09:01.707 "snapshot": false, 00:09:01.707 "clone": false, 00:09:01.707 "esnap_clone": false 00:09:01.707 } 00:09:01.707 } 00:09:01.707 } 00:09:01.707 ] 00:09:01.707 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:01.707 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:09:01.707 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:01.965 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:01.965 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdeec84-5412-4e61-82ed-b21062328712 00:09:01.965 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:02.224 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:02.224 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 183e045c-8c92-43b6-80de-e66c20eae51b 00:09:02.482 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0fdeec84-5412-4e61-82ed-b21062328712 
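With the recovered cluster counts confirmed, the remaining lines are teardown: the lvol, lvstore, AIO bdev and its backing file are removed, the nvmf trace shared-memory file is archived for offline analysis, and the target process is killed. Condensed (same abbreviations as above; the tar destination is the build output directory shown in the trace):

  rpc.py bdev_lvol_delete 183e045c-8c92-43b6-80de-e66c20eae51b
  rpc.py bdev_lvol_delete_lvstore -u 0fdeec84-5412-4e61-82ed-b21062328712
  rpc.py bdev_aio_delete aio_bdev
  rm -f test/nvmf/target/aio_bdev
  tar -C /dev/shm/ -cvzf output/nvmf_trace.0_shm.tar.gz nvmf_trace.0      # archive the trace buffer before the target exits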
00:09:02.740 06:04:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.999 00:09:02.999 real 0m19.213s 00:09:02.999 user 0m47.932s 00:09:02.999 sys 0m4.905s 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.999 ************************************ 00:09:02.999 END TEST lvs_grow_dirty 00:09:02.999 ************************************ 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:02.999 nvmf_trace.0 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.999 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.999 rmmod nvme_tcp 00:09:03.257 rmmod nvme_fabrics 00:09:03.257 rmmod nvme_keyring 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1641812 ']' 00:09:03.257 
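
Aside: the process_shm step above is the autotest hook that preserves the target's trace buffer once a test block finishes. Roughly, it does the equivalent of the following; $output_dir stands in for the job's output directory and is only illustrative:

    # Archive every trace shared-memory file left behind by app shm id 0
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
    for f in $shm_files; do
        tar -C /dev/shm/ -cvzf "$output_dir/${f}_shm.tar.gz" "$f"   # here: nvmf_trace.0_shm.tar.gz
    done
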
06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1641812 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1641812 ']' 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1641812 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1641812 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1641812' 00:09:03.257 killing process with pid 1641812 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1641812 00:09:03.257 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1641812 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.517 06:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.423 00:09:05.423 real 0m41.708s 00:09:05.423 user 1m9.961s 00:09:05.423 sys 0m8.792s 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 ************************************ 00:09:05.423 END TEST nvmf_lvs_grow 00:09:05.423 ************************************ 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 ************************************ 00:09:05.423 START TEST nvmf_bdev_io_wait 
00:09:05.423 ************************************ 00:09:05.423 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:05.682 * Looking for test storage... 00:09:05.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.682 
06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:05.682 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.683 06:04:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:07.586 06:05:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:07.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:07.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:07.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.586 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:07.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:07.587 06:05:00 
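
Aside: gather_supported_nvmf_pci_devs builds its candidate list from known Intel E810/X722 and Mellanox device IDs and then resolves each PCI function to its kernel netdev through sysfs, which is how the two E810 ports end up as cvl_0_0 and cvl_0_1 above. A standalone approximation of that lookup (the operstate read mirrors the "up == up" check in the trace, but the exact attribute nvmf/common.sh reads is not shown here):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            dev=${netdir##*/}
            state=$(cat "$netdir/operstate" 2>/dev/null)
            echo "Found net device under $pci: $dev ($state)"
        done
    done
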
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:07.587 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:07.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:09:07.846 00:09:07.846 --- 10.0.0.2 ping statistics --- 00:09:07.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.846 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:09:07.846 00:09:07.846 --- 10.0.0.1 ping statistics --- 00:09:07.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.846 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1644337 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1644337 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1644337 ']' 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.846 06:05:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.846 [2024-07-23 06:05:01.047447] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
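
Aside: for a phy run, nvmf_tcp_init keeps the initiator port (cvl_0_1) in the root namespace and moves its peer (cvl_0_0) into a private namespace, so the 10.0.0.1 <-> 10.0.0.2 traffic crosses the real E810 pair; the two pings above are the sanity check that both directions work before the target is launched inside that namespace with --wait-for-rpc. Condensed from the commands traced above (binary paths abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace, paused until RPC configuration arrives
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
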
00:09:07.846 [2024-07-23 06:05:01.047531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.846 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.846 [2024-07-23 06:05:01.095624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:07.846 [2024-07-23 06:05:01.126397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.105 [2024-07-23 06:05:01.224829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.105 [2024-07-23 06:05:01.224905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.105 [2024-07-23 06:05:01.224922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.105 [2024-07-23 06:05:01.224936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.105 [2024-07-23 06:05:01.224947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.105 [2024-07-23 06:05:01.225007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.105 [2024-07-23 06:05:01.225038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.105 [2024-07-23 06:05:01.225159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.105 [2024-07-23 06:05:01.225162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.105 [2024-07-23 06:05:01.401808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.105 Malloc0 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.105 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.364 [2024-07-23 06:05:01.463818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1644373 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1644374 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:08.364 06:05:01 
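
Aside: everything the target needs for this test is provisioned over JSON-RPC once it is up, using the rpc_cmd helper (which points rpc.py at the target's socket). Condensed, the sequence traced above is the following; going by rpc.py's flags, the tiny pool passed to bdev_set_options (5 bdev_io structures with a per-core cache of 1) is what later starves bdevperf of bdev_io and drives it into the io_wait path this test exists to cover:

    rpc_cmd bdev_set_options -p 5 -c 1             # deliberately tiny bdev_io pool / cache
    rpc_cmd framework_start_init                   # leave --wait-for-rpc mode
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
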
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1644377 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:08.364 { 00:09:08.364 "params": { 00:09:08.364 "name": "Nvme$subsystem", 00:09:08.364 "trtype": "$TEST_TRANSPORT", 00:09:08.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.364 "adrfam": "ipv4", 00:09:08.364 "trsvcid": "$NVMF_PORT", 00:09:08.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.364 "hdgst": ${hdgst:-false}, 00:09:08.364 "ddgst": ${ddgst:-false} 00:09:08.364 }, 00:09:08.364 "method": "bdev_nvme_attach_controller" 00:09:08.364 } 00:09:08.364 EOF 00:09:08.364 )") 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:08.364 { 00:09:08.364 "params": { 00:09:08.364 "name": "Nvme$subsystem", 00:09:08.364 "trtype": "$TEST_TRANSPORT", 00:09:08.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.364 "adrfam": "ipv4", 00:09:08.364 "trsvcid": "$NVMF_PORT", 00:09:08.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.364 "hdgst": ${hdgst:-false}, 00:09:08.364 "ddgst": ${ddgst:-false} 00:09:08.364 }, 00:09:08.364 "method": "bdev_nvme_attach_controller" 00:09:08.364 } 00:09:08.364 EOF 00:09:08.364 )") 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1644379 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:08.364 { 00:09:08.364 "params": { 00:09:08.364 "name": "Nvme$subsystem", 00:09:08.364 "trtype": "$TEST_TRANSPORT", 00:09:08.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.364 "adrfam": "ipv4", 00:09:08.364 "trsvcid": "$NVMF_PORT", 00:09:08.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.364 "hdgst": ${hdgst:-false}, 00:09:08.364 "ddgst": ${ddgst:-false} 00:09:08.364 }, 00:09:08.364 "method": "bdev_nvme_attach_controller" 00:09:08.364 } 00:09:08.364 EOF 00:09:08.364 )") 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:08.364 { 00:09:08.364 "params": { 00:09:08.364 "name": "Nvme$subsystem", 00:09:08.364 "trtype": "$TEST_TRANSPORT", 00:09:08.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.364 "adrfam": "ipv4", 00:09:08.364 "trsvcid": "$NVMF_PORT", 00:09:08.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.364 "hdgst": ${hdgst:-false}, 00:09:08.364 "ddgst": ${ddgst:-false} 00:09:08.364 }, 00:09:08.364 "method": "bdev_nvme_attach_controller" 00:09:08.364 } 00:09:08.364 EOF 00:09:08.364 )") 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1644373 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:08.364 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:08.364 "params": { 00:09:08.364 "name": "Nvme1", 00:09:08.364 "trtype": "tcp", 00:09:08.365 "traddr": "10.0.0.2", 00:09:08.365 "adrfam": "ipv4", 00:09:08.365 "trsvcid": "4420", 00:09:08.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.365 "hdgst": false, 00:09:08.365 "ddgst": false 00:09:08.365 }, 00:09:08.365 "method": "bdev_nvme_attach_controller" 00:09:08.365 }' 00:09:08.365 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:08.365 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:08.365 "params": { 00:09:08.365 "name": "Nvme1", 00:09:08.365 "trtype": "tcp", 00:09:08.365 "traddr": "10.0.0.2", 00:09:08.365 "adrfam": "ipv4", 00:09:08.365 "trsvcid": "4420", 00:09:08.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.365 "hdgst": false, 00:09:08.365 "ddgst": false 00:09:08.365 }, 00:09:08.365 "method": "bdev_nvme_attach_controller" 00:09:08.365 }' 00:09:08.365 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:08.365 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:08.365 "params": { 00:09:08.365 "name": "Nvme1", 00:09:08.365 "trtype": "tcp", 00:09:08.365 "traddr": "10.0.0.2", 00:09:08.365 "adrfam": "ipv4", 00:09:08.365 "trsvcid": "4420", 00:09:08.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.365 "hdgst": false, 00:09:08.365 "ddgst": false 00:09:08.365 }, 00:09:08.365 "method": "bdev_nvme_attach_controller" 00:09:08.365 }' 00:09:08.365 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:08.365 06:05:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:08.365 "params": { 00:09:08.365 "name": "Nvme1", 00:09:08.365 "trtype": "tcp", 00:09:08.365 "traddr": "10.0.0.2", 00:09:08.365 "adrfam": "ipv4", 00:09:08.365 "trsvcid": "4420", 00:09:08.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.365 "hdgst": false, 00:09:08.365 "ddgst": false 00:09:08.365 }, 00:09:08.365 "method": "bdev_nvme_attach_controller" 00:09:08.365 }' 00:09:08.365 [2024-07-23 06:05:01.512698] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:08.365 [2024-07-23 06:05:01.512771] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:08.365 [2024-07-23 06:05:01.513507] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:08.365 [2024-07-23 06:05:01.513507] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:08.365 [2024-07-23 06:05:01.513508] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:09:08.365 [2024-07-23 06:05:01.513605] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-23 06:05:01.513606] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-23 06:05:01.513606] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:08.365 --proc-type=auto ] 00:09:08.365 --proc-type=auto ] 00:09:08.365 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.365 [2024-07-23 06:05:01.659802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:08.365 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.365 [2024-07-23 06:05:01.688753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.623 [2024-07-23 06:05:01.759892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:08.623 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.623 [2024-07-23 06:05:01.764032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:08.623 [2024-07-23 06:05:01.789791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.623 [2024-07-23 06:05:01.859979] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:08.623 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.623 [2024-07-23 06:05:01.865536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:08.623 [2024-07-23 06:05:01.890093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.623 [2024-07-23 06:05:01.934865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:08.623 [2024-07-23 06:05:01.965146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.882 [2024-07-23 06:05:01.968070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:08.882 [2024-07-23 06:05:02.034136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:08.882 Running I/O for 1 seconds... 00:09:09.140 Running I/O for 1 seconds... 00:09:09.140 Running I/O for 1 seconds... 00:09:09.140 Running I/O for 1 seconds... 
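
Aside: at this point four bdevperf instances are running in parallel against the same Nvme1n1 bdev, one workload per core (mask 0x10 write, 0x20 read, 0x40 flush, 0x80 unmap), each at queue depth 128 with 4 KiB I/O for one second; the /dev/fd/63 in their command lines is bash process substitution feeding each one the bdev_nvme_attach_controller JSON printed a little earlier. A condensed form of the four launches (BDEVPERF stands for the full build/examples/bdevperf path):

    BDEVPERF=build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
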
00:09:10.080 00:09:10.080 Latency(us) 00:09:10.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.080 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:10.080 Nvme1n1 : 1.01 11546.89 45.11 0.00 0.00 11040.72 6941.96 20291.89 00:09:10.080 =================================================================================================================== 00:09:10.080 Total : 11546.89 45.11 0.00 0.00 11040.72 6941.96 20291.89 00:09:10.080 00:09:10.080 Latency(us) 00:09:10.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.080 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:10.080 Nvme1n1 : 1.00 132101.74 516.02 0.00 0.00 965.24 344.37 1462.42 00:09:10.080 =================================================================================================================== 00:09:10.080 Total : 132101.74 516.02 0.00 0.00 965.24 344.37 1462.42 00:09:10.080 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1644374 00:09:10.080 00:09:10.080 Latency(us) 00:09:10.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.080 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:10.080 Nvme1n1 : 1.01 7754.75 30.29 0.00 0.00 16460.23 5170.06 26796.94 00:09:10.080 =================================================================================================================== 00:09:10.080 Total : 7754.75 30.29 0.00 0.00 16460.23 5170.06 26796.94 00:09:10.080 00:09:10.080 Latency(us) 00:09:10.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.080 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:10.080 Nvme1n1 : 1.01 6754.85 26.39 0.00 0.00 18838.61 5704.06 51263.72 00:09:10.080 =================================================================================================================== 00:09:10.080 Total : 6754.85 26.39 0.00 0.00 18838.61 5704.06 51263.72 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1644377 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1644379 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
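
Aside: in the result tables above the MiB/s column is just IOPS scaled by the 4 KiB I/O size (MiB/s = IOPS / 256), which is a quick way to sanity-check the rows; the much higher flush rate is consistent with flushes completing immediately on the RAM-backed Malloc0 namespace. For example:

    # write and flush rows: IOPS / 256 should reproduce the reported MiB/s
    awk 'BEGIN { printf "write: %.2f MiB/s  flush: %.2f MiB/s\n", 11546.89 / 256, 132101.74 / 256 }'
    # -> write: 45.11 MiB/s  flush: 516.02 MiB/s, matching the table
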
00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.649 rmmod nvme_tcp 00:09:10.649 rmmod nvme_fabrics 00:09:10.649 rmmod nvme_keyring 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1644337 ']' 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1644337 00:09:10.649 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1644337 ']' 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1644337 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1644337 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1644337' 00:09:10.650 killing process with pid 1644337 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1644337 00:09:10.650 06:05:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1644337 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.908 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.813 00:09:12.813 real 0m7.320s 00:09:12.813 user 0m16.163s 00:09:12.813 sys 0m3.586s 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.813 ************************************ 00:09:12.813 END TEST 
nvmf_bdev_io_wait 00:09:12.813 ************************************ 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.813 ************************************ 00:09:12.813 START TEST nvmf_queue_depth 00:09:12.813 ************************************ 00:09:12.813 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.072 * Looking for test storage... 00:09:13.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.072 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.073 06:05:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.073 06:05:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # 
net_devs=() 00:09:14.984 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:14.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
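[editor's note] The trace above classifies NICs by PCI vendor/device ID (Intel 0x1592/0x159b into e810, 0x37d2 into x722, the Mellanox IDs into mlx) and then, because the transport is tcp, resolves each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, assuming the same sysfs layout; it is a simplified stand-in for the script's pci_bus_cache machinery, and the PCI address and cvl_* names are the ones seen in this run:

  # map one PCI function to its net device, the way nvmf/common.sh does above
  pci=0000:0a:00.0
  vendor=$(cat /sys/bus/pci/devices/$pci/vendor)   # 0x8086 for Intel
  device=$(cat /sys/bus/pci/devices/$pci/device)   # 0x159b for this E810 port
  net_devs=( /sys/bus/pci/devices/$pci/net/* )     # e.g. .../net/cvl_0_0
  echo "Found net device under $pci: ${net_devs[0]##*/} ($vendor - $device)"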
00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:14.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:14.985 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:14.985 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:09:14.985 00:09:14.985 --- 10.0.0.2 ping statistics --- 00:09:14.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.985 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:09:14.985 00:09:14.985 --- 10.0.0.1 ping statistics --- 00:09:14.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.985 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.985 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1646604 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1646604 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1646604 ']' 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
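[editor's note] nvmf_tcp_init in the trace above splits the two E810 ports into a point-to-point target/initiator pair: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; TCP port 4420 is opened and reachability is verified in both directions with ping. Condensed from the trace (interface and namespace names are those of this particular run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt process itself is then started inside the namespace, which is why nvmfappstart runs it as "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2" in the lines above.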
00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.259 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 [2024-07-23 06:05:08.395032] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:15.259 [2024-07-23 06:05:08.395117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.259 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.259 [2024-07-23 06:05:08.434140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:15.259 [2024-07-23 06:05:08.464840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.259 [2024-07-23 06:05:08.560668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.259 [2024-07-23 06:05:08.560738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.259 [2024-07-23 06:05:08.560754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.259 [2024-07-23 06:05:08.560776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.259 [2024-07-23 06:05:08.560788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.259 [2024-07-23 06:05:08.560819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.535 [2024-07-23 06:05:08.705396] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.535 Malloc0 00:09:15.535 06:05:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.535 [2024-07-23 06:05:08.765918] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1646633 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1646633 /var/tmp/bdevperf.sock 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1646633 ']' 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.535 06:05:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.535 [2024-07-23 06:05:08.813459] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:09:15.535 [2024-07-23 06:05:08.813521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646633 ] 00:09:15.535 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.535 [2024-07-23 06:05:08.846513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:15.535 [2024-07-23 06:05:08.876498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.794 [2024-07-23 06:05:08.968251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.794 06:05:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.794 06:05:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:15.794 06:05:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:15.794 06:05:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.794 06:05:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.051 NVMe0n1 00:09:16.051 06:05:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.051 06:05:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:16.051 Running I/O for 10 seconds... 
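[editor's note] The queue_depth test wires the target and the benchmark together entirely over JSON-RPC, as traced above. A condensed sketch of the same sequence, written against scripts/rpc.py (the rpc_cmd helper in the trace issues these same calls); socket paths, the NQN and the flags are taken from this run, and the flags of interest are -q 1024 (queue depth) and -w verify:

  # target side (default RPC socket /var/tmp/spdk.sock, inside the target namespace)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf with its own RPC socket, 1024-deep 4 KiB verify for 10 s
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests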
00:09:28.252 00:09:28.252 Latency(us) 00:09:28.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.252 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:28.252 Verification LBA range: start 0x0 length 0x4000 00:09:28.252 NVMe0n1 : 10.09 8708.56 34.02 0.00 0.00 117061.01 27185.30 71070.15 00:09:28.252 =================================================================================================================== 00:09:28.252 Total : 8708.56 34.02 0.00 0.00 117061.01 27185.30 71070.15 00:09:28.252 0 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1646633 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1646633 ']' 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1646633 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1646633 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1646633' 00:09:28.252 killing process with pid 1646633 00:09:28.252 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1646633 00:09:28.253 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.253 00:09:28.253 Latency(us) 00:09:28.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.253 =================================================================================================================== 00:09:28.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1646633 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.253 rmmod nvme_tcp 00:09:28.253 rmmod nvme_fabrics 00:09:28.253 rmmod nvme_keyring 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:28.253 
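[editor's note] A cross-check on the verify result above: with Little's law (outstanding IOs ~= IOPS x mean latency), 8708.56 IO/s x 117061 us ~= 1019 in-flight IOs, which is essentially the configured queue depth of 1024 — the run keeps the queue saturated, so the reported average latency is dominated by queueing rather than device service time. Throughput likewise follows from the IOPS column: 8708.56 x 4096 / 2^20 ~= 34.02 MiB/s, as reported.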
06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1646604 ']' 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1646604 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1646604 ']' 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1646604 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1646604 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1646604' 00:09:28.253 killing process with pid 1646604 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1646604 00:09:28.253 06:05:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1646604 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.253 06:05:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.821 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.821 00:09:28.821 real 0m16.035s 00:09:28.821 user 0m22.595s 00:09:28.821 sys 0m3.071s 00:09:28.821 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.821 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.821 ************************************ 00:09:28.821 END TEST nvmf_queue_depth 00:09:28.821 ************************************ 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
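[editor's note] Each test script in this log is launched through the autotest run_test helper, which produces the START TEST / END TEST banners and the real/user/sys timing block seen above. A minimal illustrative sketch of such a wrapper, under the assumption that it mainly times the given command and prints the banners — the real helper in autotest_common.sh also handles the argument-count check ('[' 3 -le 1 ']' in the trace), xtrace toggling and return-code bookkeeping:

  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"        # e.g. run_test nvmf_target_multipath .../multipath.sh --transport=tcp
      echo "************ END TEST $name ************"
  }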
00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.080 ************************************ 00:09:29.080 START TEST nvmf_target_multipath 00:09:29.080 ************************************ 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.080 * Looking for test storage... 00:09:29.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.080 06:05:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.080 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.081 06:05:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.983 06:05:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.983 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:30.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.984 
06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:30.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:30.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:30.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.984 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:31.242 00:09:31.242 --- 10.0.0.2 ping statistics --- 00:09:31.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.242 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:09:31.242 00:09:31.242 --- 10.0.0.1 ping statistics --- 00:09:31.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.242 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.242 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:31.243 only one NIC for nvmf test 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.243 rmmod nvme_tcp 00:09:31.243 rmmod nvme_fabrics 00:09:31.243 rmmod nvme_keyring 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:31.243 
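[Annotation] nvmf_tcp_init above carves out the test topology: the target port (cvl_0_0) is moved into its own network namespace and given the target IP, the initiator port (cvl_0_1) stays in the root namespace, TCP port 4420 is opened in iptables, and a ping in each direction verifies connectivity. The same steps as a standalone sketch; interface names and the 10.0.0.x addresses are the ones used on this host:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace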
06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.243 06:05:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.777 00:09:33.777 real 0m4.371s 00:09:33.777 user 0m0.841s 00:09:33.777 sys 0m1.528s 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:33.777 ************************************ 00:09:33.777 END TEST nvmf_target_multipath 00:09:33.777 ************************************ 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.777 ************************************ 00:09:33.777 START TEST nvmf_zcopy 00:09:33.777 ************************************ 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:33.777 * Looking for test storage... 00:09:33.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:33.777 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.778 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.680 06:05:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:35.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.680 
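[Annotation] A few records back, nvmf/common.sh also derived a host identity with 'nvme gen-hostnqn' and kept it in NVME_HOST for initiator-side use. This particular zcopy run drives I/O through bdevperf rather than the kernel initiator, but those variables would typically feed an 'nvme connect' like the sketch below; the subsystem NQN and address are the ones configured later in this log, and deriving the host ID from the NQN's UUID suffix is an assumption:

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumption: host ID is the UUID suffix of the NQN
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"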
06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:35.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:35.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:35.680 Found net 
devices under 0000:0a:00.1: cvl_0_1 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.680 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:09:35.681 00:09:35.681 --- 10.0.0.2 ping statistics --- 00:09:35.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.681 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:35.681 00:09:35.681 --- 10.0.0.1 ping statistics --- 00:09:35.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.681 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1651816 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1651816 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1651816 ']' 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
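[Annotation] nvmfappstart above launches the SPDK target inside the target namespace (-i 0 shared-memory id, -e 0xFFFF tracepoint group mask, -m 0x2 core mask) and then waits for its RPC socket to come up. A rough equivalent of that launch-and-wait, with an illustrative polling loop rather than the suite's real waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the RPC socket until the target answers (illustrative; the real
# waitforlisten() also checks that the pid is still alive, among other things).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done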
00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.681 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.681 [2024-07-23 06:05:28.891671] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:35.681 [2024-07-23 06:05:28.891742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.681 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.681 [2024-07-23 06:05:28.927115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:35.681 [2024-07-23 06:05:28.954364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.939 [2024-07-23 06:05:29.043278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.939 [2024-07-23 06:05:29.043338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.939 [2024-07-23 06:05:29.043374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.939 [2024-07-23 06:05:29.043386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.939 [2024-07-23 06:05:29.043396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.939 [2024-07-23 06:05:29.043423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.939 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.939 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:35.939 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.939 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.939 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.939 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.939 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.940 [2024-07-23 06:05:29.190757] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.940 06:05:29 
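[Annotation] The rpc_cmd calls here and in the next few records configure the zcopy target end to end: a TCP transport with zero-copy enabled, a subsystem, its data and discovery listeners, and a malloc bdev exposed as namespace 1. Condensed below into direct rpc.py calls (rpc_cmd is the suite's wrapper around scripts/rpk — scripts/rpc.py — against /var/tmp/spdk.sock; the RPC variable is just shorthand for this sketch):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy      # flags as issued above; --zcopy turns on zero-copy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0             # 32 MB malloc bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1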
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.940 [2024-07-23 06:05:29.206945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.940 malloc0 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:35.940 { 00:09:35.940 "params": { 00:09:35.940 "name": "Nvme$subsystem", 00:09:35.940 "trtype": "$TEST_TRANSPORT", 00:09:35.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.940 "adrfam": "ipv4", 00:09:35.940 "trsvcid": "$NVMF_PORT", 00:09:35.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.940 "hdgst": ${hdgst:-false}, 00:09:35.940 "ddgst": ${ddgst:-false} 00:09:35.940 }, 00:09:35.940 "method": "bdev_nvme_attach_controller" 00:09:35.940 } 00:09:35.940 EOF 00:09:35.940 )") 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # 
cat 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:35.940 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:35.940 "params": { 00:09:35.940 "name": "Nvme1", 00:09:35.940 "trtype": "tcp", 00:09:35.940 "traddr": "10.0.0.2", 00:09:35.940 "adrfam": "ipv4", 00:09:35.940 "trsvcid": "4420", 00:09:35.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.940 "hdgst": false, 00:09:35.940 "ddgst": false 00:09:35.940 }, 00:09:35.940 "method": "bdev_nvme_attach_controller" 00:09:35.940 }' 00:09:36.198 [2024-07-23 06:05:29.307921] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:36.198 [2024-07-23 06:05:29.308006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651958 ] 00:09:36.198 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.198 [2024-07-23 06:05:29.346141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:36.198 [2024-07-23 06:05:29.380181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.198 [2024-07-23 06:05:29.473135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.455 Running I/O for 10 seconds... 00:09:46.426 00:09:46.426 Latency(us) 00:09:46.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.426 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:46.426 Verification LBA range: start 0x0 length 0x1000 00:09:46.426 Nvme1n1 : 10.05 5887.12 45.99 0.00 0.00 21596.83 1177.22 40972.14 00:09:46.426 =================================================================================================================== 00:09:46.426 Total : 5887.12 45.99 0.00 0.00 21596.83 1177.22 40972.14 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1653157 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:46.685 { 00:09:46.685 "params": { 00:09:46.685 "name": "Nvme$subsystem", 00:09:46.685 "trtype": "$TEST_TRANSPORT", 00:09:46.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.685 "adrfam": "ipv4", 00:09:46.685 "trsvcid": "$NVMF_PORT", 00:09:46.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:09:46.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.685 "hdgst": ${hdgst:-false}, 00:09:46.685 "ddgst": ${ddgst:-false} 00:09:46.685 }, 00:09:46.685 "method": "bdev_nvme_attach_controller" 00:09:46.685 } 00:09:46.685 EOF 00:09:46.685 )") 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:46.685 [2024-07-23 06:05:39.967982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:39.968032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:46.685 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:46.685 "params": { 00:09:46.685 "name": "Nvme1", 00:09:46.685 "trtype": "tcp", 00:09:46.685 "traddr": "10.0.0.2", 00:09:46.685 "adrfam": "ipv4", 00:09:46.685 "trsvcid": "4420", 00:09:46.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.685 "hdgst": false, 00:09:46.685 "ddgst": false 00:09:46.685 }, 00:09:46.685 "method": "bdev_nvme_attach_controller" 00:09:46.685 }' 00:09:46.685 [2024-07-23 06:05:39.975915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:39.975941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.685 [2024-07-23 06:05:39.983931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:39.983956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.685 [2024-07-23 06:05:39.991951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:39.991988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.685 [2024-07-23 06:05:39.999994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:40.000018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.685 [2024-07-23 06:05:40.006165] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:09:46.685 [2024-07-23 06:05:40.006244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653157 ] 00:09:46.685 [2024-07-23 06:05:40.008016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:40.008042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.685 [2024-07-23 06:05:40.016040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:40.016067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.685 [2024-07-23 06:05:40.024063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.685 [2024-07-23 06:05:40.024088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.032115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.032152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.944 [2024-07-23 06:05:40.040112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.040141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.040511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:46.944 [2024-07-23 06:05:40.048126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.048152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.056149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.056174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.064168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.064192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.072190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.072214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.073028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.944 [2024-07-23 06:05:40.080240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.080275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.088272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.088314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.096261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.096287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:09:46.944 [2024-07-23 06:05:40.104284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.104309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.112305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.112329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.120342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.120368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.128374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.128413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.136393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.136428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.144393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.144417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.152413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.152438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.160434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.160459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.168456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.168480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.168979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.944 [2024-07-23 06:05:40.176475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.176499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.184515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.184546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.192555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.192595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.200580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.200630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.208605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.208667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 
06:05:40.216634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.216689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.224670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.224708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.232700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.232748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.240679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.240701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.248726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.248763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.256785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.256838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.944 [2024-07-23 06:05:40.264782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.944 [2024-07-23 06:05:40.264819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.945 [2024-07-23 06:05:40.272755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.945 [2024-07-23 06:05:40.272777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.945 [2024-07-23 06:05:40.280782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.945 [2024-07-23 06:05:40.280804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.289038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.289078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.297070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.297097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.305091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.305123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.313129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.313158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.321141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.321170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.329154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.329179] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.337184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.337209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.345246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.345274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.353237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.353265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 Running I/O for 5 seconds... 00:09:47.203 [2024-07-23 06:05:40.361260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.361287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.375814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.375844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.387892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.387936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.399785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.399814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.412126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.412154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.424467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.424494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.436564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.436591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.448769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.448798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.460629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.460657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.472920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.472948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.485091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.485123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.497850] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.497879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.510241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.510270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.522255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.522284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.203 [2024-07-23 06:05:40.534432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.203 [2024-07-23 06:05:40.534461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.546411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.546439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.558303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.558331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.570190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.570228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.582841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.582869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.595910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.595942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.608536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.608567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.621496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.621527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.633957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.633988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.646507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.646534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.659322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.659353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.671856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.671885] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.684327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.684359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.697226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.697257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.709756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.709783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.721937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.721964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.734379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.734423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.746828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.746860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.759241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.759283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.771611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.771645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.784382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.784412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.462 [2024-07-23 06:05:40.796609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.462 [2024-07-23 06:05:40.796662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.808479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.808517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.820844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.820872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.833076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.833103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.845366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.845408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.857741] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.857769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.869520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.869551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.882180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.882210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.894715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.894745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.907782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.907813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.920497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.920524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.933501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.933531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.946166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.946198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.959147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.959179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.971730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.971758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.984151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.984182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:40.996882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:40.996924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:41.009490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:41.009517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:41.021537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:41.021568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:41.034719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:41.034747] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:41.046981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:41.047025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.720 [2024-07-23 06:05:41.059372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.720 [2024-07-23 06:05:41.059403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.071713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.071741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.084172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.084200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.096383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.096413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.109205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.109235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.122015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.122042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.134694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.134729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.147537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.147568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.160437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.160468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.173246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.173277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.186227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.186273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.198793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.198821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.211017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.211044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.223048] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.223090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.235470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.235501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.247488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.247514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.259246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.259273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.271137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.271182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.283037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.283072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.296960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.296987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.307447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.307474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.979 [2024-07-23 06:05:41.320357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.979 [2024-07-23 06:05:41.320385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.332627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.332655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.344686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.344714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.356844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.356872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.369230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.369257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.383292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.383319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.394755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.394783] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.406586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.406640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.419082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.419109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.430820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.430848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.442966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.442994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.454860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.454903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.467021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.467048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.478939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.478967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.491099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.491126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.502987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.503014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.515195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.515230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.527720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.527748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.539235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.539261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.551409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.551435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.563626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.563670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.238 [2024-07-23 06:05:41.575488] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.238 [2024-07-23 06:05:41.575515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.587644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.587671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.599522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.599549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.611665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.611693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.624259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.624286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.636562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.636590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.648751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.648780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.660500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.660526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.672280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.672308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.684654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.684682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.696907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.696950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.708926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.708953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.721166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.721196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.733540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.733567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.745394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.745437] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.757887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.757915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.769849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.769877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.782013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.782039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.794102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.794129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.805381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.805408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.817229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.817257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.496 [2024-07-23 06:05:41.828480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.496 [2024-07-23 06:05:41.828506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.840167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.840195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.851887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.851931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.863573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.863622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.877230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.877258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.888148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.888176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.900138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.900165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.912774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.912801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.925328] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.925355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.937715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.937743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.950314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.950341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.962414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.962441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.974828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.974856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.986847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.986875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:41.998801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:41.998828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.010727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.010755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.022908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.022935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.034687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.034715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.046489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.046516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.059081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.059108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.071787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.071815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.084501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.084527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.757 [2024-07-23 06:05:42.097452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.757 [2024-07-23 06:05:42.097480] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.109697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.109725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.122461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.122488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.134411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.134437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.146234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.146261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.158542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.158569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.170858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.170886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.182737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.182781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.195544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.195571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.207945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.207972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.219941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.219968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.232204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.232231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.244242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.244270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.256746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.256774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.269100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.269127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.281239] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.281267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.293873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.293915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.306646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.306679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.321033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.321060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.332240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.332283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.344804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.344834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.015 [2024-07-23 06:05:42.357302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.015 [2024-07-23 06:05:42.357330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.369245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.369288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.381439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.381466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.393315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.393341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.405157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.405184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.416820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.416848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.428371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.428405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.440204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.440247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.451857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.451884] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.463431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.463472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.475296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.475323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.486996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.487023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.499305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.499346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.511417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.511445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.523897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.523924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.273 [2024-07-23 06:05:42.536281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.273 [2024-07-23 06:05:42.536307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.274 [2024-07-23 06:05:42.548536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.274 [2024-07-23 06:05:42.548563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.274 [2024-07-23 06:05:42.560655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.274 [2024-07-23 06:05:42.560683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.274 [2024-07-23 06:05:42.573971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.274 [2024-07-23 06:05:42.573997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.274 [2024-07-23 06:05:42.585554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.274 [2024-07-23 06:05:42.585581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.274 [2024-07-23 06:05:42.597916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.274 [2024-07-23 06:05:42.597943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.274 [2024-07-23 06:05:42.610704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.274 [2024-07-23 06:05:42.610732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.622483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.622510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.634959] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.634990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.646705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.646733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.658376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.658427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.671144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.671187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.683471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.683498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.695879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.695921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.708173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.708200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.719975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.720002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.733460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.733486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.743798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.743827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.756816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.756850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.769329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.769358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.782003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.782031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.794419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.794446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.807293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.807320] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.819397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.819425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.831548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.831579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.843843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.843871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.855774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.855801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.532 [2024-07-23 06:05:42.868340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.532 [2024-07-23 06:05:42.868385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.880732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.880761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.892973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.893007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.905459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.905499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.917522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.917549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.929571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.929620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.941939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.941965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.953965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.953992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.966775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.966803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.978709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.978737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:42.990812] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:42.990841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790 [2024-07-23 06:05:43.003081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.790 [2024-07-23 06:05:43.003110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.790
[the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats at roughly 10-14 ms intervals from 2024-07-23 06:05:43.015339 through 06:05:45.379266]
00:09:52.125 Latency(us)
00:09:52.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:52.125 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:52.125 Nvme1n1 : 5.01 10446.55 81.61 0.00 0.00 12234.39 5170.06 25049.32
00:09:52.125 ===================================================================================================================
00:09:52.125 Total : 10446.55 81.61 0.00 0.00 12234.39 5170.06 25049.32
00:09:52.125 [2024-07-23 06:05:45.387235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.125 [2024-07-23 06:05:45.387264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.125 [2024-07-23 06:05:45.395255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.395284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.403299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.403341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.411342]
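For context, the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs in this stretch is expected behaviour: it appears to come from a background job in zcopy.sh (the one the script later tries to kill at its line 42) that keeps retrying an add-namespace RPC for an NSID that is still attached while I/O is in flight. A minimal sketch of the call that produces the error, assuming the standard scripts/rpc.py helper behind rpc_cmd and a bdev name such as the malloc0 seen later in this run, would be:

  # Illustrative only; not part of the captured log output.
  # NSID 1 is still owned by the existing namespace, so the target answers
  # "Requested NSID 1 already in use" and the RPC reports "Unable to add namespace".
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1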
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.411396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.419364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.419415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.427381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.427431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.435400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.435449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.443447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.443500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.451458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.451510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.459474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.459524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.126 [2024-07-23 06:05:45.467502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.126 [2024-07-23 06:05:45.467551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.475524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.475579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.483539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.483590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.491558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.491609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.499581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.499637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.507599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.507654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.515633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.515682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.523644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.523699] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.531640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.531678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.539693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.539740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.547713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.547761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.555776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.555828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.563729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.563754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.571740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.571769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.579800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.579852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.587817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.587865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.595790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.595814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.603808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.603831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 [2024-07-23 06:05:45.611828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.384 [2024-07-23 06:05:45.611850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1653157) - No such process 00:09:52.384 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1653157 00:09:52.384 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.384 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.384 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd 
bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.385 delay0 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.385 06:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:52.385 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.643 [2024-07-23 06:05:45.770783] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:59.199 Initializing NVMe Controllers 00:09:59.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:59.199 Initialization complete. Launching workers. 00:09:59.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 797 00:09:59.199 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1077, failed to submit 40 00:09:59.199 success 892, unsuccess 185, failed 0 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.199 rmmod nvme_tcp 00:09:59.199 rmmod nvme_fabrics 00:09:59.199 rmmod nvme_keyring 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.199 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1651816 ']' 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1651816 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1651816 ']' 00:09:59.200 06:05:52 
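To make the tail of the zcopy run above easier to follow: once the retry job is gone, the test removes namespace 1, recreates it on top of a delay bdev, and then drives it with the bundled abort example. A rough equivalent of those steps, assuming the scripts/rpc.py helper that rpc_cmd wraps and the listener at 10.0.0.2:4420 used by this run, is:

  # Illustrative sketch of the steps traced above; not taken verbatim from the log.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'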
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1651816 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1651816 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1651816' 00:09:59.200 killing process with pid 1651816 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1651816 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1651816 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.200 06:05:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:01.761 00:10:01.761 real 0m27.920s 00:10:01.761 user 0m40.435s 00:10:01.761 sys 0m8.704s 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.761 ************************************ 00:10:01.761 END TEST nvmf_zcopy 00:10:01.761 ************************************ 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.761 ************************************ 00:10:01.761 START TEST nvmf_nmic 00:10:01.761 ************************************ 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:01.761 * Looking for test storage... 
00:10:01.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.761 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.762 06:05:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:01.762 06:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:03.663 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:03.663 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.663 06:05:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:03.663 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:03.663 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.663 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:03.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:10:03.664 00:10:03.664 --- 10.0.0.2 ping statistics --- 00:10:03.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.664 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:10:03.664 00:10:03.664 --- 10.0.0.1 ping statistics --- 00:10:03.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.664 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1656545 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1656545 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1656545 ']' 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:03.664 06:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.664 [2024-07-23 06:05:56.937162] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:03.664 [2024-07-23 06:05:56.937258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.664 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.664 [2024-07-23 06:05:56.985863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:03.922 [2024-07-23 06:05:57.017270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.922 [2024-07-23 06:05:57.117568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.922 [2024-07-23 06:05:57.117641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.922 [2024-07-23 06:05:57.117659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.922 [2024-07-23 06:05:57.117673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.922 [2024-07-23 06:05:57.117685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
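The environment the target was just started in comes from nvmf_tcp_init above: one of the two ice ports is moved into a private network namespace and addressed as the target side (10.0.0.2), while the other stays in the default namespace as the initiator side (10.0.0.1). A condensed sketch of the same setup done by hand, assuming the ports have already been renamed cvl_0_0 and cvl_0_1 as in this log, and using a plain wait on the RPC socket where the harness uses waitforlisten:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done     # wait for the RPC socket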
00:10:03.922 [2024-07-23 06:05:57.117774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.922 [2024-07-23 06:05:57.121638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.922 [2024-07-23 06:05:57.121677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.922 [2024-07-23 06:05:57.121682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.922 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.922 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:03.922 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.922 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.922 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 [2024-07-23 06:05:57.278783] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 Malloc0 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 [2024-07-23 06:05:57.330430] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:04.180 test case1: single bdev can't be used in multiple subsystems 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 [2024-07-23 06:05:57.354247] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:04.180 [2024-07-23 06:05:57.354276] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:04.180 [2024-07-23 06:05:57.354291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.180 request: 00:10:04.180 { 00:10:04.180 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:04.180 "namespace": { 00:10:04.180 "bdev_name": "Malloc0", 00:10:04.180 "no_auto_visible": false 00:10:04.180 }, 00:10:04.180 "method": "nvmf_subsystem_add_ns", 00:10:04.180 "req_id": 1 00:10:04.180 } 00:10:04.180 Got JSON-RPC error response 00:10:04.180 response: 00:10:04.180 { 00:10:04.180 "code": -32602, 00:10:04.180 "message": "Invalid parameters" 00:10:04.180 } 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:04.180 Adding namespace failed - expected result. 
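Test case1 passes precisely because the second nvmf_subsystem_add_ns is rejected: Malloc0 is already claimed (exclusive_write) by cnode1, so the RPC returns -32602 "Invalid parameters". The same check can be reproduced by hand with scripts/rpc.py against a running nvmf_tgt; a sketch, assuming the default /var/tmp/spdk.sock RPC socket:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # Malloc0 is already owned by cnode1, so this second add is expected to fail
  if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'unexpected: one bdev attached to two subsystems' >&2
  fi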
00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:04.180 test case2: host connect to nvmf target in multiple paths 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.180 [2024-07-23 06:05:57.362362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.180 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.746 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:05.311 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:05.311 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:05.311 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.311 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:05.311 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:07.838 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:07.838 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:07.838 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.838 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:07.838 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.838 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:07.838 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.838 [global] 00:10:07.838 thread=1 00:10:07.838 invalidate=1 00:10:07.838 rw=write 00:10:07.838 time_based=1 00:10:07.838 runtime=1 00:10:07.838 ioengine=libaio 00:10:07.838 direct=1 00:10:07.838 bs=4096 00:10:07.838 iodepth=1 00:10:07.838 norandommap=0 00:10:07.838 numjobs=1 00:10:07.838 00:10:07.838 verify_dump=1 00:10:07.838 verify_backlog=512 00:10:07.838 verify_state_save=0 00:10:07.838 do_verify=1 00:10:07.838 verify=crc32c-intel 00:10:07.838 [job0] 00:10:07.838 filename=/dev/nvme0n1 00:10:07.838 Could not set queue depth (nvme0n1) 00:10:07.838 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:07.838 fio-3.35 00:10:07.838 Starting 1 thread 00:10:08.770 00:10:08.770 job0: (groupid=0, jobs=1): err= 0: pid=1657121: Tue Jul 23 06:06:01 2024 00:10:08.770 read: IOPS=1014, BW=4059KiB/s (4157kB/s)(4112KiB/1013msec) 00:10:08.770 slat (nsec): min=5086, max=48487, avg=13328.94, stdev=7723.31 00:10:08.770 clat (usec): min=291, max=40984, avg=555.32, stdev=2526.01 00:10:08.770 lat (usec): min=298, max=40999, avg=568.65, stdev=2526.45 00:10:08.770 clat percentiles (usec): 00:10:08.770 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 326], 00:10:08.770 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 383], 60.00th=[ 404], 00:10:08.770 | 70.00th=[ 457], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 523], 00:10:08.770 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:10:08.770 | 99.99th=[41157] 00:10:08.770 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:10:08.770 slat (usec): min=6, max=40673, avg=55.76, stdev=1274.68 00:10:08.770 clat (usec): min=183, max=435, avg=217.20, stdev=28.87 00:10:08.770 lat (usec): min=192, max=40910, avg=272.96, stdev=1277.10 00:10:08.770 clat percentiles (usec): 00:10:08.770 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 198], 00:10:08.770 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:08.770 | 70.00th=[ 219], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 273], 00:10:08.770 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 408], 99.95th=[ 437], 00:10:08.770 | 99.99th=[ 437] 00:10:08.770 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:10:08.770 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:08.770 lat (usec) : 250=53.78%, 500=41.34%, 750=4.72% 00:10:08.770 lat (msec) : 50=0.16% 00:10:08.770 cpu : usr=1.68%, sys=2.87%, ctx=2569, majf=0, minf=2 00:10:08.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.770 issued rwts: total=1028,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.770 00:10:08.770 Run status group 0 (all jobs): 00:10:08.770 READ: bw=4059KiB/s (4157kB/s), 4059KiB/s-4059KiB/s (4157kB/s-4157kB/s), io=4112KiB (4211kB), run=1013-1013msec 00:10:08.770 WRITE: bw=6065KiB/s (6211kB/s), 6065KiB/s-6065KiB/s (6211kB/s-6211kB/s), io=6144KiB (6291kB), run=1013-1013msec 00:10:08.770 00:10:08.770 Disk stats (read/write): 00:10:08.770 nvme0n1: ios=1050/1536, merge=0/0, ticks=1388/322, in_queue=1710, util=99.80% 00:10:08.770 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:09.029 06:06:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.029 rmmod nvme_tcp 00:10:09.029 rmmod nvme_fabrics 00:10:09.029 rmmod nvme_keyring 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1656545 ']' 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1656545 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1656545 ']' 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1656545 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1656545 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1656545' 00:10:09.029 killing process with pid 1656545 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1656545 00:10:09.029 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1656545 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.287 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.814 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.814 00:10:11.814 real 0m9.943s 00:10:11.814 user 0m22.193s 00:10:11.814 sys 0m2.469s 00:10:11.814 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.815 ************************************ 00:10:11.815 END TEST nvmf_nmic 00:10:11.815 ************************************ 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.815 ************************************ 00:10:11.815 START TEST nvmf_fio_target 00:10:11.815 ************************************ 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:11.815 * Looking for test storage... 00:10:11.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.815 06:06:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.815 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.189 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:13.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.190 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:13.447 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.447 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:13.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:13.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.448 06:06:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:10:13.448 00:10:13.448 --- 10.0.0.2 ping statistics --- 00:10:13.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.448 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:10:13.448 00:10:13.448 --- 10.0.0.1 ping statistics --- 00:10:13.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.448 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1659275 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1659275 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1659275 ']' 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.448 06:06:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.448 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.448 [2024-07-23 06:06:06.735979] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:13.448 [2024-07-23 06:06:06.736073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.448 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.448 [2024-07-23 06:06:06.780520] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:13.706 [2024-07-23 06:06:06.809164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.706 [2024-07-23 06:06:06.896792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.706 [2024-07-23 06:06:06.896853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.706 [2024-07-23 06:06:06.896881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.706 [2024-07-23 06:06:06.896892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.706 [2024-07-23 06:06:06.896909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
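Note on the trace above: nvmf_tcp_init (nvmf/common.sh) moves one port of the E810 NIC (cvl_0_0) into a dedicated network namespace, addresses both ends of the link, opens TCP port 4420, verifies reachability with ping in both directions, loads nvme-tcp, and then launches nvmf_tgt inside that namespace. The following is a minimal standalone sketch of that bring-up; the interface names, addresses, and flags are copied from the trace, the nvmf_tgt path is abbreviated, and it assumes the same two-port setup run as root, so treat it as an illustration of the steps the harness performs rather than the harness script itself.

# Sketch of the network bring-up traced by nvmf_tcp_init (assumes cvl_0_0 and
# cvl_0_1 exist and all commands run as root; values copied from the log above).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                   # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator-side port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> root namespace
modprobe nvme-tcp
# Start the SPDK target inside the namespace, as the log does next
# (path shortened; the CI run uses the full workspace path):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &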
00:10:13.706 [2024-07-23 06:06:06.896962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.706 [2024-07-23 06:06:06.897025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.706 [2024-07-23 06:06:06.897355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.706 [2024-07-23 06:06:06.897358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.706 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.706 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:13.706 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.706 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.706 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.963 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.963 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:14.220 [2024-07-23 06:06:07.320034] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.220 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.477 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:14.477 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.735 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:14.735 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.992 06:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:14.992 06:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.249 06:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:15.250 06:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:15.507 06:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.768 06:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:15.768 06:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.039 06:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:16.039 06:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.296 06:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:16.296 06:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:16.553 06:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.812 06:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:16.812 06:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.070 06:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:17.070 06:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:17.328 06:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.585 [2024-07-23 06:06:10.764823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.585 06:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:17.843 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:18.100 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.664 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:18.664 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.664 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.664 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:18.664 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:18.664 06:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:21.188 06:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:21.188 06:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:21.188 06:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.188 06:06:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:21.188 06:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.188 06:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:21.188 06:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:21.188 [global] 00:10:21.188 thread=1 00:10:21.188 invalidate=1 00:10:21.188 rw=write 00:10:21.188 time_based=1 00:10:21.188 runtime=1 00:10:21.188 ioengine=libaio 00:10:21.188 direct=1 00:10:21.188 bs=4096 00:10:21.188 iodepth=1 00:10:21.188 norandommap=0 00:10:21.188 numjobs=1 00:10:21.188 00:10:21.188 verify_dump=1 00:10:21.188 verify_backlog=512 00:10:21.188 verify_state_save=0 00:10:21.188 do_verify=1 00:10:21.188 verify=crc32c-intel 00:10:21.188 [job0] 00:10:21.188 filename=/dev/nvme0n1 00:10:21.188 [job1] 00:10:21.188 filename=/dev/nvme0n2 00:10:21.188 [job2] 00:10:21.188 filename=/dev/nvme0n3 00:10:21.188 [job3] 00:10:21.188 filename=/dev/nvme0n4 00:10:21.188 Could not set queue depth (nvme0n1) 00:10:21.188 Could not set queue depth (nvme0n2) 00:10:21.188 Could not set queue depth (nvme0n3) 00:10:21.188 Could not set queue depth (nvme0n4) 00:10:21.188 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.188 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.188 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.188 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.188 fio-3.35 00:10:21.188 Starting 4 threads 00:10:22.118 00:10:22.118 job0: (groupid=0, jobs=1): err= 0: pid=1660825: Tue Jul 23 06:06:15 2024 00:10:22.118 read: IOPS=773, BW=3095KiB/s (3169kB/s)(3132KiB/1012msec) 00:10:22.118 slat (nsec): min=5555, max=53968, avg=14634.11, stdev=10499.91 00:10:22.118 clat (usec): min=324, max=40984, avg=941.63, stdev=4433.12 00:10:22.118 lat (usec): min=330, max=41017, avg=956.27, stdev=4434.89 00:10:22.118 clat percentiles (usec): 00:10:22.118 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 408], 00:10:22.118 | 30.00th=[ 424], 40.00th=[ 437], 50.00th=[ 445], 60.00th=[ 449], 00:10:22.118 | 70.00th=[ 457], 80.00th=[ 465], 90.00th=[ 478], 95.00th=[ 490], 00:10:22.118 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:22.118 | 99.99th=[41157] 00:10:22.118 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:10:22.118 slat (nsec): min=6326, max=46465, avg=10662.58, stdev=6929.68 00:10:22.118 clat (usec): min=186, max=402, avg=238.92, stdev=27.64 00:10:22.118 lat (usec): min=198, max=422, avg=249.59, stdev=28.59 00:10:22.118 clat percentiles (usec): 00:10:22.118 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:10:22.118 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:10:22.118 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 285], 00:10:22.118 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 400], 99.95th=[ 404], 00:10:22.118 | 99.99th=[ 404] 00:10:22.118 bw ( KiB/s): min= 496, max= 7696, per=26.33%, avg=4096.00, stdev=5091.17, samples=2 00:10:22.118 iops : min= 124, max= 1924, avg=1024.00, 
stdev=1272.79, samples=2 00:10:22.118 lat (usec) : 250=40.23%, 500=58.49%, 750=0.66% 00:10:22.118 lat (msec) : 2=0.06%, 50=0.55% 00:10:22.118 cpu : usr=1.48%, sys=2.67%, ctx=1808, majf=0, minf=1 00:10:22.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.118 issued rwts: total=783,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.118 job1: (groupid=0, jobs=1): err= 0: pid=1660826: Tue Jul 23 06:06:15 2024 00:10:22.118 read: IOPS=681, BW=2728KiB/s (2793kB/s)(2796KiB/1025msec) 00:10:22.118 slat (nsec): min=5875, max=52525, avg=21309.83, stdev=10744.45 00:10:22.118 clat (usec): min=311, max=41158, avg=1074.65, stdev=4823.40 00:10:22.118 lat (usec): min=320, max=41171, avg=1095.96, stdev=4823.47 00:10:22.118 clat percentiles (usec): 00:10:22.118 | 1.00th=[ 330], 5.00th=[ 388], 10.00th=[ 412], 20.00th=[ 437], 00:10:22.118 | 30.00th=[ 449], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 490], 00:10:22.118 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 603], 00:10:22.118 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:22.118 | 99.99th=[41157] 00:10:22.118 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:10:22.118 slat (nsec): min=6262, max=58621, avg=14200.57, stdev=8596.16 00:10:22.118 clat (usec): min=187, max=438, avg=227.84, stdev=29.67 00:10:22.118 lat (usec): min=195, max=455, avg=242.04, stdev=34.22 00:10:22.118 clat percentiles (usec): 00:10:22.118 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:10:22.118 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:10:22.118 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 277], 00:10:22.118 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 441], 00:10:22.118 | 99.99th=[ 441] 00:10:22.118 bw ( KiB/s): min= 4096, max= 4096, per=26.33%, avg=4096.00, stdev= 0.00, samples=2 00:10:22.118 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:22.118 lat (usec) : 250=52.18%, 500=34.42%, 750=12.77% 00:10:22.118 lat (msec) : 20=0.06%, 50=0.58% 00:10:22.118 cpu : usr=1.86%, sys=2.73%, ctx=1726, majf=0, minf=1 00:10:22.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.118 issued rwts: total=699,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.118 job2: (groupid=0, jobs=1): err= 0: pid=1660827: Tue Jul 23 06:06:15 2024 00:10:22.118 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:22.118 slat (nsec): min=11615, max=79227, avg=32184.86, stdev=6976.00 00:10:22.118 clat (usec): min=370, max=42545, avg=1196.14, stdev=5375.24 00:10:22.118 lat (usec): min=387, max=42559, avg=1228.32, stdev=5373.30 00:10:22.118 clat percentiles (usec): 00:10:22.118 | 1.00th=[ 379], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 441], 00:10:22.118 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 478], 00:10:22.118 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 586], 95.00th=[ 644], 00:10:22.118 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:10:22.118 | 
99.99th=[42730] 00:10:22.118 write: IOPS=913, BW=3652KiB/s (3740kB/s)(3656KiB/1001msec); 0 zone resets 00:10:22.119 slat (nsec): min=6594, max=73667, avg=23546.68, stdev=13540.95 00:10:22.119 clat (usec): min=250, max=706, avg=369.94, stdev=59.68 00:10:22.119 lat (usec): min=258, max=726, avg=393.48, stdev=66.72 00:10:22.119 clat percentiles (usec): 00:10:22.119 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 314], 00:10:22.119 | 30.00th=[ 330], 40.00th=[ 359], 50.00th=[ 379], 60.00th=[ 388], 00:10:22.119 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 449], 95.00th=[ 474], 00:10:22.119 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 709], 99.95th=[ 709], 00:10:22.119 | 99.99th=[ 709] 00:10:22.119 bw ( KiB/s): min= 4096, max= 4096, per=26.33%, avg=4096.00, stdev= 0.00, samples=1 00:10:22.119 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:22.119 lat (usec) : 500=93.06%, 750=6.24%, 1000=0.07% 00:10:22.119 lat (msec) : 50=0.63% 00:10:22.119 cpu : usr=2.40%, sys=3.40%, ctx=1428, majf=0, minf=2 00:10:22.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.119 issued rwts: total=512,914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.119 job3: (groupid=0, jobs=1): err= 0: pid=1660828: Tue Jul 23 06:06:15 2024 00:10:22.119 read: IOPS=520, BW=2081KiB/s (2131kB/s)(2096KiB/1007msec) 00:10:22.119 slat (nsec): min=4948, max=65193, avg=14995.23, stdev=9836.15 00:10:22.119 clat (usec): min=306, max=41368, avg=1357.35, stdev=6192.82 00:10:22.119 lat (usec): min=312, max=41375, avg=1372.34, stdev=6195.32 00:10:22.119 clat percentiles (usec): 00:10:22.119 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:10:22.119 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 379], 00:10:22.119 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 453], 95.00th=[ 529], 00:10:22.119 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:22.119 | 99.99th=[41157] 00:10:22.119 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:10:22.119 slat (nsec): min=6204, max=55800, avg=11505.24, stdev=6756.56 00:10:22.119 clat (usec): min=191, max=528, avg=261.63, stdev=52.28 00:10:22.119 lat (usec): min=199, max=563, avg=273.14, stdev=53.35 00:10:22.119 clat percentiles (usec): 00:10:22.119 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:10:22.119 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:10:22.119 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 347], 95.00th=[ 379], 00:10:22.119 | 99.00th=[ 404], 99.50th=[ 449], 99.90th=[ 529], 99.95th=[ 529], 00:10:22.119 | 99.99th=[ 529] 00:10:22.119 bw ( KiB/s): min= 368, max= 7824, per=26.33%, avg=4096.00, stdev=5272.19, samples=2 00:10:22.119 iops : min= 92, max= 1956, avg=1024.00, stdev=1318.05, samples=2 00:10:22.119 lat (usec) : 250=38.50%, 500=59.24%, 750=1.42% 00:10:22.119 lat (msec) : 50=0.84% 00:10:22.119 cpu : usr=0.70%, sys=2.29%, ctx=1549, majf=0, minf=1 00:10:22.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.119 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:22.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.119 00:10:22.119 Run status group 0 (all jobs): 00:10:22.119 READ: bw=9826KiB/s (10.1MB/s), 2046KiB/s-3095KiB/s (2095kB/s-3169kB/s), io=9.84MiB (10.3MB), run=1001-1025msec 00:10:22.119 WRITE: bw=15.2MiB/s (15.9MB/s), 3652KiB/s-4068KiB/s (3740kB/s-4165kB/s), io=15.6MiB (16.3MB), run=1001-1025msec 00:10:22.119 00:10:22.119 Disk stats (read/write): 00:10:22.119 nvme0n1: ios=829/1024, merge=0/0, ticks=602/242, in_queue=844, util=87.37% 00:10:22.119 nvme0n2: ios=702/1024, merge=0/0, ticks=1484/225, in_queue=1709, util=89.52% 00:10:22.119 nvme0n3: ios=537/567, merge=0/0, ticks=1517/200, in_queue=1717, util=93.42% 00:10:22.119 nvme0n4: ios=577/1024, merge=0/0, ticks=628/261, in_queue=889, util=95.90% 00:10:22.119 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:22.119 [global] 00:10:22.119 thread=1 00:10:22.119 invalidate=1 00:10:22.119 rw=randwrite 00:10:22.119 time_based=1 00:10:22.119 runtime=1 00:10:22.119 ioengine=libaio 00:10:22.119 direct=1 00:10:22.119 bs=4096 00:10:22.119 iodepth=1 00:10:22.119 norandommap=0 00:10:22.119 numjobs=1 00:10:22.119 00:10:22.119 verify_dump=1 00:10:22.119 verify_backlog=512 00:10:22.119 verify_state_save=0 00:10:22.119 do_verify=1 00:10:22.119 verify=crc32c-intel 00:10:22.119 [job0] 00:10:22.119 filename=/dev/nvme0n1 00:10:22.119 [job1] 00:10:22.119 filename=/dev/nvme0n2 00:10:22.119 [job2] 00:10:22.119 filename=/dev/nvme0n3 00:10:22.119 [job3] 00:10:22.119 filename=/dev/nvme0n4 00:10:22.119 Could not set queue depth (nvme0n1) 00:10:22.119 Could not set queue depth (nvme0n2) 00:10:22.119 Could not set queue depth (nvme0n3) 00:10:22.119 Could not set queue depth (nvme0n4) 00:10:22.375 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.375 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.375 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.375 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.375 fio-3.35 00:10:22.375 Starting 4 threads 00:10:23.745 00:10:23.745 job0: (groupid=0, jobs=1): err= 0: pid=1661063: Tue Jul 23 06:06:16 2024 00:10:23.745 read: IOPS=827, BW=3309KiB/s (3388kB/s)(3312KiB/1001msec) 00:10:23.745 slat (nsec): min=6080, max=59797, avg=15870.86, stdev=5088.14 00:10:23.745 clat (usec): min=308, max=41221, avg=815.65, stdev=4215.95 00:10:23.745 lat (usec): min=317, max=41229, avg=831.52, stdev=4216.16 00:10:23.745 clat percentiles (usec): 00:10:23.745 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 347], 00:10:23.745 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:10:23.745 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 457], 95.00th=[ 482], 00:10:23.745 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:23.745 | 99.99th=[41157] 00:10:23.745 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:23.745 slat (nsec): min=7296, max=58683, avg=17278.39, stdev=8353.61 00:10:23.745 clat (usec): min=201, max=1406, avg=277.93, stdev=79.77 00:10:23.745 lat (usec): min=209, max=1420, avg=295.21, stdev=80.55 00:10:23.745 clat percentiles (usec): 00:10:23.745 | 1.00th=[ 219], 5.00th=[ 227], 
10.00th=[ 233], 20.00th=[ 239], 00:10:23.745 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:10:23.745 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 359], 95.00th=[ 400], 00:10:23.745 | 99.00th=[ 486], 99.50th=[ 570], 99.90th=[ 1287], 99.95th=[ 1401], 00:10:23.745 | 99.99th=[ 1401] 00:10:23.745 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.745 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.745 lat (usec) : 250=22.79%, 500=75.27%, 750=1.24%, 1000=0.05% 00:10:23.745 lat (msec) : 2=0.16%, 50=0.49% 00:10:23.745 cpu : usr=2.50%, sys=4.10%, ctx=1852, majf=0, minf=2 00:10:23.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.745 issued rwts: total=828,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.745 job1: (groupid=0, jobs=1): err= 0: pid=1661074: Tue Jul 23 06:06:16 2024 00:10:23.745 read: IOPS=734, BW=2940KiB/s (3010kB/s)(2984KiB/1015msec) 00:10:23.745 slat (nsec): min=5397, max=72491, avg=19633.47, stdev=10791.69 00:10:23.745 clat (usec): min=306, max=41393, avg=925.73, stdev=4430.28 00:10:23.746 lat (usec): min=319, max=41411, avg=945.36, stdev=4431.44 00:10:23.746 clat percentiles (usec): 00:10:23.746 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 351], 00:10:23.746 | 30.00th=[ 363], 40.00th=[ 424], 50.00th=[ 453], 60.00th=[ 469], 00:10:23.746 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 537], 95.00th=[ 570], 00:10:23.746 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:23.746 | 99.99th=[41157] 00:10:23.746 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:10:23.746 slat (nsec): min=7090, max=56274, avg=16568.95, stdev=7722.39 00:10:23.746 clat (usec): min=191, max=1111, avg=276.87, stdev=52.12 00:10:23.746 lat (usec): min=202, max=1118, avg=293.44, stdev=51.08 00:10:23.746 clat percentiles (usec): 00:10:23.746 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:10:23.746 | 30.00th=[ 251], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:10:23.746 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 359], 00:10:23.746 | 99.00th=[ 420], 99.50th=[ 457], 99.90th=[ 717], 99.95th=[ 1106], 00:10:23.746 | 99.99th=[ 1106] 00:10:23.746 bw ( KiB/s): min= 1472, max= 6720, per=29.31%, avg=4096.00, stdev=3710.90, samples=2 00:10:23.746 iops : min= 368, max= 1680, avg=1024.00, stdev=927.72, samples=2 00:10:23.746 lat (usec) : 250=16.38%, 500=75.31%, 750=7.68% 00:10:23.746 lat (msec) : 2=0.11%, 50=0.51% 00:10:23.746 cpu : usr=1.78%, sys=3.94%, ctx=1771, majf=0, minf=1 00:10:23.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.746 issued rwts: total=746,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.746 job2: (groupid=0, jobs=1): err= 0: pid=1661106: Tue Jul 23 06:06:16 2024 00:10:23.746 read: IOPS=608, BW=2433KiB/s (2492kB/s)(2460KiB/1011msec) 00:10:23.746 slat (nsec): min=5592, max=67310, avg=22490.31, stdev=11507.78 00:10:23.746 clat (usec): min=318, max=41245, avg=1133.14, 
stdev=5206.59 00:10:23.746 lat (usec): min=328, max=41301, avg=1155.63, stdev=5207.23 00:10:23.746 clat percentiles (usec): 00:10:23.746 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 379], 00:10:23.746 | 30.00th=[ 404], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 453], 00:10:23.746 | 70.00th=[ 469], 80.00th=[ 494], 90.00th=[ 523], 95.00th=[ 562], 00:10:23.746 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:23.746 | 99.99th=[41157] 00:10:23.746 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:10:23.746 slat (nsec): min=7758, max=61628, avg=17759.89, stdev=10901.63 00:10:23.746 clat (usec): min=198, max=580, avg=266.69, stdev=56.21 00:10:23.746 lat (usec): min=215, max=620, avg=284.45, stdev=62.62 00:10:23.746 clat percentiles (usec): 00:10:23.746 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:10:23.746 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 262], 00:10:23.746 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 392], 00:10:23.746 | 99.00th=[ 424], 99.50th=[ 441], 99.90th=[ 453], 99.95th=[ 578], 00:10:23.746 | 99.99th=[ 578] 00:10:23.746 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=2 00:10:23.746 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:23.746 lat (usec) : 250=34.35%, 500=59.61%, 750=5.31% 00:10:23.746 lat (msec) : 2=0.06%, 50=0.67% 00:10:23.746 cpu : usr=1.68%, sys=3.47%, ctx=1641, majf=0, minf=1 00:10:23.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.746 issued rwts: total=615,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.746 job3: (groupid=0, jobs=1): err= 0: pid=1661119: Tue Jul 23 06:06:16 2024 00:10:23.746 read: IOPS=75, BW=300KiB/s (307kB/s)(308KiB/1026msec) 00:10:23.746 slat (nsec): min=8318, max=44496, avg=15023.38, stdev=10091.69 00:10:23.746 clat (usec): min=329, max=41458, avg=11434.68, stdev=18213.75 00:10:23.746 lat (usec): min=339, max=41476, avg=11449.70, stdev=18221.93 00:10:23.746 clat percentiles (usec): 00:10:23.746 | 1.00th=[ 330], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 343], 00:10:23.746 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:10:23.746 | 70.00th=[ 375], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:23.746 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:23.746 | 99.99th=[41681] 00:10:23.746 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:23.746 slat (nsec): min=7834, max=64116, avg=11348.40, stdev=5715.25 00:10:23.746 clat (usec): min=211, max=510, avg=266.25, stdev=45.02 00:10:23.746 lat (usec): min=219, max=527, avg=277.60, stdev=45.82 00:10:23.746 clat percentiles (usec): 00:10:23.746 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 237], 00:10:23.746 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:10:23.746 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 326], 95.00th=[ 383], 00:10:23.746 | 99.00th=[ 416], 99.50th=[ 461], 99.90th=[ 510], 99.95th=[ 510], 00:10:23.746 | 99.99th=[ 510] 00:10:23.746 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.746 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.746 lat (usec) : 
250=37.69%, 500=58.40%, 750=0.34% 00:10:23.746 lat (msec) : 50=3.57% 00:10:23.746 cpu : usr=0.49%, sys=0.78%, ctx=590, majf=0, minf=1 00:10:23.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.746 issued rwts: total=77,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.746 00:10:23.746 Run status group 0 (all jobs): 00:10:23.746 READ: bw=8834KiB/s (9046kB/s), 300KiB/s-3309KiB/s (307kB/s-3388kB/s), io=9064KiB (9282kB), run=1001-1026msec 00:10:23.746 WRITE: bw=13.6MiB/s (14.3MB/s), 1996KiB/s-4092KiB/s (2044kB/s-4190kB/s), io=14.0MiB (14.7MB), run=1001-1026msec 00:10:23.746 00:10:23.746 Disk stats (read/write): 00:10:23.746 nvme0n1: ios=573/1024, merge=0/0, ticks=561/257, in_queue=818, util=86.77% 00:10:23.746 nvme0n2: ios=691/1024, merge=0/0, ticks=1379/267, in_queue=1646, util=89.23% 00:10:23.746 nvme0n3: ios=584/1024, merge=0/0, ticks=1029/257, in_queue=1286, util=93.19% 00:10:23.746 nvme0n4: ios=40/512, merge=0/0, ticks=1601/137, in_queue=1738, util=94.29% 00:10:23.746 06:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:23.746 [global] 00:10:23.746 thread=1 00:10:23.746 invalidate=1 00:10:23.746 rw=write 00:10:23.746 time_based=1 00:10:23.746 runtime=1 00:10:23.746 ioengine=libaio 00:10:23.746 direct=1 00:10:23.746 bs=4096 00:10:23.746 iodepth=128 00:10:23.746 norandommap=0 00:10:23.746 numjobs=1 00:10:23.746 00:10:23.746 verify_dump=1 00:10:23.746 verify_backlog=512 00:10:23.746 verify_state_save=0 00:10:23.746 do_verify=1 00:10:23.746 verify=crc32c-intel 00:10:23.746 [job0] 00:10:23.746 filename=/dev/nvme0n1 00:10:23.746 [job1] 00:10:23.746 filename=/dev/nvme0n2 00:10:23.746 [job2] 00:10:23.746 filename=/dev/nvme0n3 00:10:23.746 [job3] 00:10:23.746 filename=/dev/nvme0n4 00:10:23.746 Could not set queue depth (nvme0n1) 00:10:23.746 Could not set queue depth (nvme0n2) 00:10:23.746 Could not set queue depth (nvme0n3) 00:10:23.746 Could not set queue depth (nvme0n4) 00:10:23.746 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.746 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.746 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.746 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.746 fio-3.35 00:10:23.746 Starting 4 threads 00:10:25.119 00:10:25.119 job0: (groupid=0, jobs=1): err= 0: pid=1661413: Tue Jul 23 06:06:18 2024 00:10:25.119 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:10:25.119 slat (usec): min=3, max=14949, avg=107.90, stdev=823.67 00:10:25.119 clat (usec): min=6047, max=40903, avg=14872.57, stdev=5819.91 00:10:25.119 lat (usec): min=6059, max=40938, avg=14980.47, stdev=5888.32 00:10:25.119 clat percentiles (usec): 00:10:25.119 | 1.00th=[ 6915], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:10:25.119 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12387], 60.00th=[14091], 00:10:25.119 | 70.00th=[16319], 80.00th=[20055], 90.00th=[23725], 95.00th=[27919], 00:10:25.119 | 99.00th=[29754], 
99.50th=[34341], 99.90th=[36439], 99.95th=[38536], 00:10:25.119 | 99.99th=[41157] 00:10:25.119 write: IOPS=4933, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1007msec); 0 zone resets 00:10:25.119 slat (usec): min=4, max=12831, avg=88.73, stdev=651.01 00:10:25.119 clat (usec): min=1070, max=39118, avg=11863.79, stdev=5478.61 00:10:25.119 lat (usec): min=1110, max=39137, avg=11952.52, stdev=5499.21 00:10:25.119 clat percentiles (usec): 00:10:25.119 | 1.00th=[ 3228], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7767], 00:10:25.119 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[11338], 60.00th=[11994], 00:10:25.119 | 70.00th=[13042], 80.00th=[14091], 90.00th=[17957], 95.00th=[20841], 00:10:25.119 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:10:25.119 | 99.99th=[39060] 00:10:25.119 bw ( KiB/s): min=18248, max=20472, per=32.27%, avg=19360.00, stdev=1572.61, samples=2 00:10:25.119 iops : min= 4562, max= 5118, avg=4840.00, stdev=393.15, samples=2 00:10:25.119 lat (msec) : 2=0.16%, 4=0.62%, 10=25.14%, 20=61.21%, 50=12.89% 00:10:25.119 cpu : usr=7.75%, sys=10.14%, ctx=291, majf=0, minf=1 00:10:25.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:25.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.119 issued rwts: total=4608,4968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.119 job1: (groupid=0, jobs=1): err= 0: pid=1661414: Tue Jul 23 06:06:18 2024 00:10:25.119 read: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1007msec) 00:10:25.119 slat (usec): min=2, max=9560, avg=95.46, stdev=669.17 00:10:25.119 clat (usec): min=3140, max=39018, avg=13445.97, stdev=4620.34 00:10:25.119 lat (usec): min=5699, max=39032, avg=13541.42, stdev=4656.58 00:10:25.119 clat percentiles (usec): 00:10:25.119 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9765], 00:10:25.119 | 30.00th=[10552], 40.00th=[12125], 50.00th=[12649], 60.00th=[13304], 00:10:25.119 | 70.00th=[14615], 80.00th=[15926], 90.00th=[18482], 95.00th=[21890], 00:10:25.119 | 99.00th=[33817], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:10:25.119 | 99.99th=[39060] 00:10:25.119 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:25.119 slat (usec): min=4, max=20885, avg=90.84, stdev=705.41 00:10:25.119 clat (usec): min=1738, max=39030, avg=11850.17, stdev=4873.57 00:10:25.119 lat (usec): min=1755, max=39046, avg=11941.01, stdev=4898.76 00:10:25.119 clat percentiles (usec): 00:10:25.119 | 1.00th=[ 5014], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 7701], 00:10:25.119 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[11076], 60.00th=[12256], 00:10:25.119 | 70.00th=[13566], 80.00th=[15270], 90.00th=[17433], 95.00th=[20841], 00:10:25.119 | 99.00th=[30278], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:10:25.119 | 99.99th=[39060] 00:10:25.119 bw ( KiB/s): min=20480, max=20480, per=34.13%, avg=20480.00, stdev= 0.00, samples=2 00:10:25.119 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:25.119 lat (msec) : 2=0.09%, 4=0.07%, 10=32.93%, 20=60.78%, 50=6.13% 00:10:25.119 cpu : usr=8.35%, sys=10.34%, ctx=297, majf=0, minf=1 00:10:25.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:25.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:10:25.119 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.119 job2: (groupid=0, jobs=1): err= 0: pid=1661416: Tue Jul 23 06:06:18 2024 00:10:25.119 read: IOPS=1102, BW=4409KiB/s (4515kB/s)(4444KiB/1008msec) 00:10:25.119 slat (usec): min=2, max=34618, avg=452.25, stdev=2962.44 00:10:25.119 clat (usec): min=1490, max=106074, avg=58032.29, stdev=24450.24 00:10:25.119 lat (msec): min=13, max=106, avg=58.48, stdev=24.57 00:10:25.119 clat percentiles (msec): 00:10:25.119 | 1.00th=[ 16], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 35], 00:10:25.119 | 30.00th=[ 41], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 68], 00:10:25.119 | 70.00th=[ 69], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 100], 00:10:25.119 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:10:25.119 | 99.99th=[ 107] 00:10:25.119 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:10:25.119 slat (usec): min=3, max=22328, avg=318.30, stdev=1980.37 00:10:25.119 clat (msec): min=9, max=116, avg=39.76, stdev=14.91 00:10:25.119 lat (msec): min=9, max=116, avg=40.08, stdev=15.01 00:10:25.119 clat percentiles (msec): 00:10:25.119 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 20], 20.00th=[ 27], 00:10:25.119 | 30.00th=[ 33], 40.00th=[ 37], 50.00th=[ 42], 60.00th=[ 45], 00:10:25.119 | 70.00th=[ 47], 80.00th=[ 51], 90.00th=[ 58], 95.00th=[ 65], 00:10:25.119 | 99.00th=[ 80], 99.50th=[ 86], 99.90th=[ 102], 99.95th=[ 117], 00:10:25.119 | 99.99th=[ 117] 00:10:25.119 bw ( KiB/s): min= 5928, max= 6032, per=9.97%, avg=5980.00, stdev=73.54, samples=2 00:10:25.119 iops : min= 1482, max= 1508, avg=1495.00, stdev=18.38, samples=2 00:10:25.119 lat (msec) : 2=0.04%, 10=0.57%, 20=9.86%, 50=51.23%, 100=36.76% 00:10:25.119 lat (msec) : 250=1.55% 00:10:25.119 cpu : usr=1.39%, sys=2.68%, ctx=91, majf=0, minf=1 00:10:25.119 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:10:25.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.119 issued rwts: total=1111,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.119 job3: (groupid=0, jobs=1): err= 0: pid=1661417: Tue Jul 23 06:06:18 2024 00:10:25.119 read: IOPS=3821, BW=14.9MiB/s (15.7MB/s)(15.6MiB/1048msec) 00:10:25.119 slat (usec): min=2, max=27574, avg=131.93, stdev=1018.93 00:10:25.119 clat (usec): min=1166, max=102054, avg=18898.07, stdev=15673.24 00:10:25.119 lat (usec): min=1199, max=123681, avg=19030.00, stdev=15778.72 00:10:25.119 clat percentiles (msec): 00:10:25.119 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:10:25.119 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:10:25.119 | 70.00th=[ 17], 80.00th=[ 25], 90.00th=[ 33], 95.00th=[ 53], 00:10:25.119 | 99.00th=[ 102], 99.50th=[ 103], 99.90th=[ 103], 99.95th=[ 103], 00:10:25.119 | 99.99th=[ 103] 00:10:25.119 write: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1048msec); 0 zone resets 00:10:25.119 slat (usec): min=3, max=16275, avg=99.08, stdev=686.93 00:10:25.119 clat (usec): min=572, max=31308, avg=13371.52, stdev=4445.75 00:10:25.119 lat (usec): min=690, max=31320, avg=13470.60, stdev=4474.17 00:10:25.119 clat percentiles (usec): 00:10:25.119 | 1.00th=[ 2933], 5.00th=[ 5932], 10.00th=[ 8717], 20.00th=[10552], 00:10:25.119 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13173], 
60.00th=[13566], 00:10:25.119 | 70.00th=[14091], 80.00th=[15533], 90.00th=[20841], 95.00th=[22152], 00:10:25.119 | 99.00th=[24773], 99.50th=[26870], 99.90th=[27657], 99.95th=[28443], 00:10:25.119 | 99.99th=[31327] 00:10:25.120 bw ( KiB/s): min=16336, max=16432, per=27.31%, avg=16384.00, stdev=67.88, samples=2 00:10:25.120 iops : min= 4084, max= 4108, avg=4096.00, stdev=16.97, samples=2 00:10:25.120 lat (usec) : 750=0.01%, 1000=0.01% 00:10:25.120 lat (msec) : 2=0.20%, 4=1.73%, 10=13.00%, 20=69.00%, 50=12.86% 00:10:25.120 lat (msec) : 100=2.67%, 250=0.52% 00:10:25.120 cpu : usr=4.49%, sys=6.11%, ctx=404, majf=0, minf=1 00:10:25.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:25.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.120 issued rwts: total=4005,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.120 00:10:25.120 Run status group 0 (all jobs): 00:10:25.120 READ: bw=54.8MiB/s (57.5MB/s), 4409KiB/s-19.3MiB/s (4515kB/s-20.3MB/s), io=57.5MiB (60.3MB), run=1007-1048msec 00:10:25.120 WRITE: bw=58.6MiB/s (61.4MB/s), 6095KiB/s-19.9MiB/s (6242kB/s-20.8MB/s), io=61.4MiB (64.4MB), run=1007-1048msec 00:10:25.120 00:10:25.120 Disk stats (read/write): 00:10:25.120 nvme0n1: ios=3624/4096, merge=0/0, ticks=54173/48179, in_queue=102352, util=96.19% 00:10:25.120 nvme0n2: ios=4145/4519, merge=0/0, ticks=49977/49100, in_queue=99077, util=88.62% 00:10:25.120 nvme0n3: ios=1047/1112, merge=0/0, ticks=20197/15273, in_queue=35470, util=96.76% 00:10:25.120 nvme0n4: ios=3642/4056, merge=0/0, ticks=28817/29814, in_queue=58631, util=97.58% 00:10:25.120 06:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:25.120 [global] 00:10:25.120 thread=1 00:10:25.120 invalidate=1 00:10:25.120 rw=randwrite 00:10:25.120 time_based=1 00:10:25.120 runtime=1 00:10:25.120 ioengine=libaio 00:10:25.120 direct=1 00:10:25.120 bs=4096 00:10:25.120 iodepth=128 00:10:25.120 norandommap=0 00:10:25.120 numjobs=1 00:10:25.120 00:10:25.120 verify_dump=1 00:10:25.120 verify_backlog=512 00:10:25.120 verify_state_save=0 00:10:25.120 do_verify=1 00:10:25.120 verify=crc32c-intel 00:10:25.120 [job0] 00:10:25.120 filename=/dev/nvme0n1 00:10:25.120 [job1] 00:10:25.120 filename=/dev/nvme0n2 00:10:25.120 [job2] 00:10:25.120 filename=/dev/nvme0n3 00:10:25.120 [job3] 00:10:25.120 filename=/dev/nvme0n4 00:10:25.120 Could not set queue depth (nvme0n1) 00:10:25.120 Could not set queue depth (nvme0n2) 00:10:25.120 Could not set queue depth (nvme0n3) 00:10:25.120 Could not set queue depth (nvme0n4) 00:10:25.376 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.376 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.376 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.376 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.376 fio-3.35 00:10:25.376 Starting 4 threads 00:10:26.748 00:10:26.748 job0: (groupid=0, jobs=1): err= 0: pid=1661644: Tue Jul 23 06:06:19 2024 00:10:26.748 read: IOPS=2724, BW=10.6MiB/s 
(11.2MB/s)(10.7MiB/1007msec) 00:10:26.748 slat (usec): min=2, max=26186, avg=166.41, stdev=1068.65 00:10:26.748 clat (usec): min=5785, max=67100, avg=19272.97, stdev=8093.47 00:10:26.748 lat (usec): min=8422, max=67137, avg=19439.37, stdev=8169.92 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[ 9896], 5.00th=[11207], 10.00th=[11600], 20.00th=[12911], 00:10:26.748 | 30.00th=[14353], 40.00th=[17433], 50.00th=[18482], 60.00th=[20317], 00:10:26.748 | 70.00th=[20841], 80.00th=[21890], 90.00th=[24511], 95.00th=[33817], 00:10:26.748 | 99.00th=[63177], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:10:26.748 | 99.99th=[66847] 00:10:26.748 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:10:26.748 slat (usec): min=3, max=25746, avg=170.38, stdev=1125.09 00:10:26.748 clat (usec): min=7628, max=73530, avg=23936.91, stdev=11819.03 00:10:26.748 lat (usec): min=7640, max=73586, avg=24107.29, stdev=11896.76 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[ 8717], 5.00th=[10683], 10.00th=[11994], 20.00th=[16450], 00:10:26.748 | 30.00th=[17957], 40.00th=[18744], 50.00th=[20579], 60.00th=[21890], 00:10:26.748 | 70.00th=[24773], 80.00th=[31589], 90.00th=[40633], 95.00th=[47973], 00:10:26.748 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:10:26.748 | 99.99th=[73925] 00:10:26.748 bw ( KiB/s): min=12288, max=12288, per=20.94%, avg=12288.00, stdev= 0.00, samples=2 00:10:26.748 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:26.748 lat (msec) : 10=1.93%, 20=50.72%, 50=44.52%, 100=2.84% 00:10:26.748 cpu : usr=2.19%, sys=4.27%, ctx=323, majf=0, minf=13 00:10:26.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:26.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.748 issued rwts: total=2744,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.748 job1: (groupid=0, jobs=1): err= 0: pid=1661645: Tue Jul 23 06:06:19 2024 00:10:26.748 read: IOPS=2277, BW=9110KiB/s (9328kB/s)(9128KiB/1002msec) 00:10:26.748 slat (usec): min=3, max=25814, avg=201.43, stdev=1206.55 00:10:26.748 clat (usec): min=732, max=73105, avg=24016.23, stdev=10379.01 00:10:26.748 lat (usec): min=3983, max=73145, avg=24217.66, stdev=10473.57 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[ 4178], 5.00th=[16319], 10.00th=[17957], 20.00th=[18482], 00:10:26.748 | 30.00th=[19530], 40.00th=[20317], 50.00th=[21103], 60.00th=[21627], 00:10:26.748 | 70.00th=[23462], 80.00th=[25297], 90.00th=[34866], 95.00th=[53216], 00:10:26.748 | 99.00th=[65799], 99.50th=[65799], 99.90th=[65799], 99.95th=[72877], 00:10:26.748 | 99.99th=[72877] 00:10:26.748 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:10:26.748 slat (usec): min=3, max=25644, avg=202.46, stdev=1071.61 00:10:26.748 clat (usec): min=9340, max=75907, avg=28048.83, stdev=14683.72 00:10:26.748 lat (usec): min=9352, max=75941, avg=28251.29, stdev=14773.28 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[10290], 5.00th=[10814], 10.00th=[11338], 20.00th=[17433], 00:10:26.748 | 30.00th=[21627], 40.00th=[22152], 50.00th=[23725], 60.00th=[25297], 00:10:26.748 | 70.00th=[29492], 80.00th=[37487], 90.00th=[49546], 95.00th=[64750], 00:10:26.748 | 99.00th=[71828], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:10:26.748 | 
99.99th=[76022] 00:10:26.748 bw ( KiB/s): min= 8192, max=12288, per=17.45%, avg=10240.00, stdev=2896.31, samples=2 00:10:26.748 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:26.748 lat (usec) : 750=0.02% 00:10:26.748 lat (msec) : 4=0.04%, 10=0.97%, 20=26.89%, 50=65.06%, 100=7.02% 00:10:26.748 cpu : usr=3.00%, sys=4.40%, ctx=346, majf=0, minf=13 00:10:26.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:26.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.748 issued rwts: total=2282,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.748 job2: (groupid=0, jobs=1): err= 0: pid=1661646: Tue Jul 23 06:06:19 2024 00:10:26.748 read: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(14.5MiB/1047msec) 00:10:26.748 slat (usec): min=3, max=13026, avg=117.88, stdev=827.99 00:10:26.748 clat (usec): min=3452, max=60356, avg=16282.93, stdev=9682.89 00:10:26.748 lat (usec): min=3459, max=60361, avg=16400.82, stdev=9729.01 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[ 5014], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[12256], 00:10:26.748 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[14091], 00:10:26.748 | 70.00th=[14615], 80.00th=[18482], 90.00th=[23462], 95.00th=[39060], 00:10:26.748 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60556], 99.95th=[60556], 00:10:26.748 | 99.99th=[60556] 00:10:26.748 write: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1047msec); 0 zone resets 00:10:26.749 slat (usec): min=4, max=20701, avg=119.12, stdev=686.22 00:10:26.749 clat (usec): min=857, max=55930, avg=17669.31, stdev=10193.52 00:10:26.749 lat (usec): min=883, max=55938, avg=17788.43, stdev=10246.23 00:10:26.749 clat percentiles (usec): 00:10:26.749 | 1.00th=[ 1958], 5.00th=[ 5473], 10.00th=[ 7373], 20.00th=[ 9634], 00:10:26.749 | 30.00th=[11207], 40.00th=[13042], 50.00th=[13435], 60.00th=[16057], 00:10:26.749 | 70.00th=[22152], 80.00th=[27395], 90.00th=[32113], 95.00th=[34866], 00:10:26.749 | 99.00th=[47973], 99.50th=[49021], 99.90th=[49546], 99.95th=[50070], 00:10:26.749 | 99.99th=[55837] 00:10:26.749 bw ( KiB/s): min=12528, max=20232, per=27.91%, avg=16380.00, stdev=5447.55, samples=2 00:10:26.749 iops : min= 3132, max= 5058, avg=4095.00, stdev=1361.89, samples=2 00:10:26.749 lat (usec) : 1000=0.10% 00:10:26.749 lat (msec) : 2=0.50%, 4=1.65%, 10=13.18%, 20=60.48%, 50=22.94% 00:10:26.749 lat (msec) : 100=1.14% 00:10:26.749 cpu : usr=5.07%, sys=6.98%, ctx=396, majf=0, minf=11 00:10:26.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:26.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.749 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.749 job3: (groupid=0, jobs=1): err= 0: pid=1661647: Tue Jul 23 06:06:19 2024 00:10:26.749 read: IOPS=5345, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1005msec) 00:10:26.749 slat (usec): min=2, max=11370, avg=97.51, stdev=672.60 00:10:26.749 clat (usec): min=2098, max=23948, avg=12564.10, stdev=3082.07 00:10:26.749 lat (usec): min=5293, max=25494, avg=12661.62, stdev=3117.60 00:10:26.749 clat percentiles (usec): 00:10:26.749 | 1.00th=[ 6259], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10290], 
00:10:26.749 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11994], 60.00th=[12518], 00:10:26.749 | 70.00th=[13042], 80.00th=[15008], 90.00th=[17171], 95.00th=[19006], 00:10:26.749 | 99.00th=[21627], 99.50th=[22414], 99.90th=[23462], 99.95th=[23462], 00:10:26.749 | 99.99th=[23987] 00:10:26.749 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:26.749 slat (usec): min=4, max=8329, avg=74.83, stdev=355.23 00:10:26.749 clat (usec): min=1628, max=23341, avg=10625.58, stdev=2813.11 00:10:26.749 lat (usec): min=1638, max=23351, avg=10700.41, stdev=2825.77 00:10:26.749 clat percentiles (usec): 00:10:26.749 | 1.00th=[ 4015], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 7767], 00:10:26.749 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[10945], 60.00th=[11731], 00:10:26.749 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[15401], 00:10:26.749 | 99.00th=[17957], 99.50th=[18220], 99.90th=[22414], 99.95th=[23200], 00:10:26.749 | 99.99th=[23462] 00:10:26.749 bw ( KiB/s): min=20480, max=24576, per=38.39%, avg=22528.00, stdev=2896.31, samples=2 00:10:26.749 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:26.749 lat (msec) : 2=0.05%, 4=0.41%, 10=24.13%, 20=73.95%, 50=1.46% 00:10:26.749 cpu : usr=8.07%, sys=10.06%, ctx=617, majf=0, minf=13 00:10:26.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:26.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.749 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.749 00:10:26.749 Run status group 0 (all jobs): 00:10:26.749 READ: bw=52.6MiB/s (55.2MB/s), 9110KiB/s-20.9MiB/s (9328kB/s-21.9MB/s), io=55.1MiB (57.8MB), run=1002-1047msec 00:10:26.749 WRITE: bw=57.3MiB/s (60.1MB/s), 9.98MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=60.0MiB (62.9MB), run=1002-1047msec 00:10:26.749 00:10:26.749 Disk stats (read/write): 00:10:26.749 nvme0n1: ios=2424/2560, merge=0/0, ticks=15848/18102, in_queue=33950, util=96.59% 00:10:26.749 nvme0n2: ios=1892/2048, merge=0/0, ticks=15715/19405, in_queue=35120, util=87.20% 00:10:26.749 nvme0n3: ios=3095/3303, merge=0/0, ticks=45197/59718, in_queue=104915, util=98.12% 00:10:26.749 nvme0n4: ios=4665/4839, merge=0/0, ticks=54322/48409, in_queue=102731, util=98.11% 00:10:26.749 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:26.749 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1661789 00:10:26.749 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:26.749 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:26.749 [global] 00:10:26.749 thread=1 00:10:26.749 invalidate=1 00:10:26.749 rw=read 00:10:26.749 time_based=1 00:10:26.749 runtime=10 00:10:26.749 ioengine=libaio 00:10:26.749 direct=1 00:10:26.749 bs=4096 00:10:26.749 iodepth=1 00:10:26.749 norandommap=1 00:10:26.749 numjobs=1 00:10:26.749 00:10:26.749 [job0] 00:10:26.749 filename=/dev/nvme0n1 00:10:26.749 [job1] 00:10:26.749 filename=/dev/nvme0n2 00:10:26.749 [job2] 00:10:26.749 filename=/dev/nvme0n3 00:10:26.749 [job3] 00:10:26.749 filename=/dev/nvme0n4 00:10:26.749 Could not set queue depth (nvme0n1) 00:10:26.749 Could not set queue depth 
(nvme0n2) 00:10:26.749 Could not set queue depth (nvme0n3) 00:10:26.749 Could not set queue depth (nvme0n4) 00:10:26.749 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.749 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.749 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.749 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.749 fio-3.35 00:10:26.749 Starting 4 threads 00:10:30.024 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:30.024 06:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:30.024 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=13684736, buflen=4096 00:10:30.024 fio: pid=1661880, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:30.024 06:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.024 06:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:30.281 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11218944, buflen=4096 00:10:30.281 fio: pid=1661879, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:30.538 06:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.538 06:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:30.538 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=352256, buflen=4096 00:10:30.538 fio: pid=1661877, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:30.796 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=25665536, buflen=4096 00:10:30.796 fio: pid=1661878, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:30.796 06:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.796 06:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:30.796 00:10:30.796 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1661877: Tue Jul 23 06:06:23 2024 00:10:30.796 read: IOPS=25, BW=99.5KiB/s (102kB/s)(344KiB/3457msec) 00:10:30.796 slat (usec): min=12, max=19878, avg=337.98, stdev=2280.02 00:10:30.796 clat (usec): min=478, max=41437, avg=39586.07, stdev=7475.39 00:10:30.796 lat (usec): min=496, max=61030, avg=39927.82, stdev=7883.22 00:10:30.796 clat percentiles (usec): 00:10:30.796 | 1.00th=[ 478], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:30.796 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:30.796 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:30.796 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:30.796 | 99.99th=[41681] 00:10:30.796 bw ( KiB/s): min= 96, max= 104, per=0.75%, avg=101.33, stdev= 4.13, samples=6 00:10:30.796 iops : min= 24, max= 26, avg=25.33, stdev= 1.03, samples=6 00:10:30.796 lat (usec) : 500=2.30%, 750=1.15% 00:10:30.796 lat (msec) : 50=95.40% 00:10:30.796 cpu : usr=0.00%, sys=0.09%, ctx=90, majf=0, minf=1 00:10:30.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.796 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1661878: Tue Jul 23 06:06:23 2024 00:10:30.796 read: IOPS=1691, BW=6767KiB/s (6929kB/s)(24.5MiB/3704msec) 00:10:30.796 slat (usec): min=5, max=17607, avg=22.02, stdev=309.31 00:10:30.796 clat (usec): min=286, max=41151, avg=564.01, stdev=2614.34 00:10:30.796 lat (usec): min=292, max=41165, avg=586.03, stdev=2632.78 00:10:30.796 clat percentiles (usec): 00:10:30.796 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 322], 00:10:30.796 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 416], 00:10:30.796 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 494], 95.00th=[ 529], 00:10:30.796 | 99.00th=[ 709], 99.50th=[ 1074], 99.90th=[41157], 99.95th=[41157], 00:10:30.796 | 99.99th=[41157] 00:10:30.796 bw ( KiB/s): min= 96, max=11528, per=48.83%, avg=6555.71, stdev=3975.96, samples=7 00:10:30.796 iops : min= 24, max= 2882, avg=1638.86, stdev=993.92, samples=7 00:10:30.796 lat (usec) : 500=91.08%, 750=8.04%, 1000=0.29% 00:10:30.796 lat (msec) : 2=0.13%, 4=0.02%, 20=0.02%, 50=0.41% 00:10:30.796 cpu : usr=0.92%, sys=3.08%, ctx=6274, majf=0, minf=1 00:10:30.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 issued rwts: total=6267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.796 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1661879: Tue Jul 23 06:06:23 2024 00:10:30.796 read: IOPS=852, BW=3408KiB/s (3490kB/s)(10.7MiB/3215msec) 00:10:30.796 slat (nsec): min=4912, max=68763, avg=19125.70, stdev=10145.57 00:10:30.796 clat (usec): min=314, max=41052, avg=1141.59, stdev=4969.01 00:10:30.796 lat (usec): min=320, max=41070, avg=1160.72, stdev=4969.36 00:10:30.796 clat percentiles (usec): 00:10:30.796 | 1.00th=[ 326], 5.00th=[ 355], 10.00th=[ 392], 20.00th=[ 502], 00:10:30.796 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:10:30.796 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 611], 00:10:30.796 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.796 | 99.99th=[41157] 00:10:30.796 bw ( KiB/s): min= 96, max= 7256, per=27.15%, avg=3645.33, stdev=3240.70, samples=6 00:10:30.796 iops : min= 24, max= 1814, avg=911.33, stdev=810.18, samples=6 00:10:30.796 lat (usec) : 500=19.49%, 750=78.87%, 1000=0.04% 00:10:30.796 lat (msec) : 2=0.04%, 50=1.53% 00:10:30.796 cpu : usr=0.65%, sys=1.90%, ctx=2740, majf=0, 
minf=1 00:10:30.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 issued rwts: total=2740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.796 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1661880: Tue Jul 23 06:06:23 2024 00:10:30.796 read: IOPS=1154, BW=4618KiB/s (4729kB/s)(13.1MiB/2894msec) 00:10:30.796 slat (nsec): min=4488, max=72345, avg=17445.33, stdev=11105.71 00:10:30.796 clat (usec): min=305, max=41727, avg=837.54, stdev=4138.71 00:10:30.796 lat (usec): min=312, max=41740, avg=854.98, stdev=4139.19 00:10:30.796 clat percentiles (usec): 00:10:30.796 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 351], 00:10:30.796 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 416], 00:10:30.796 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 502], 95.00th=[ 545], 00:10:30.796 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.796 | 99.99th=[41681] 00:10:30.796 bw ( KiB/s): min= 96, max= 8208, per=32.33%, avg=4340.80, stdev=3115.64, samples=5 00:10:30.796 iops : min= 24, max= 2052, avg=1085.20, stdev=778.91, samples=5 00:10:30.796 lat (usec) : 500=89.92%, 750=8.98% 00:10:30.796 lat (msec) : 20=0.03%, 50=1.05% 00:10:30.796 cpu : usr=0.90%, sys=2.25%, ctx=3342, majf=0, minf=1 00:10:30.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.796 issued rwts: total=3342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.796 00:10:30.796 Run status group 0 (all jobs): 00:10:30.796 READ: bw=13.1MiB/s (13.7MB/s), 99.5KiB/s-6767KiB/s (102kB/s-6929kB/s), io=48.6MiB (50.9MB), run=2894-3704msec 00:10:30.796 00:10:30.796 Disk stats (read/write): 00:10:30.796 nvme0n1: ios=84/0, merge=0/0, ticks=3324/0, in_queue=3324, util=95.19% 00:10:30.796 nvme0n2: ios=6023/0, merge=0/0, ticks=3571/0, in_queue=3571, util=97.91% 00:10:30.796 nvme0n3: ios=2736/0, merge=0/0, ticks=2949/0, in_queue=2949, util=96.79% 00:10:30.796 nvme0n4: ios=3340/0, merge=0/0, ticks=2613/0, in_queue=2613, util=96.75% 00:10:31.060 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.060 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:31.324 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.324 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:31.324 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.324 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc5 00:10:31.889 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.889 06:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:31.889 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:31.889 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1661789 00:10:31.889 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:31.889 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:32.146 nvmf hotplug test: fio failed as expected 00:10:32.146 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.404 rmmod nvme_tcp 00:10:32.404 rmmod nvme_fabrics 
00:10:32.404 rmmod nvme_keyring 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1659275 ']' 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1659275 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1659275 ']' 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1659275 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1659275 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1659275' 00:10:32.404 killing process with pid 1659275 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1659275 00:10:32.404 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1659275 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.662 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:35.192 00:10:35.192 real 0m23.369s 00:10:35.192 user 1m22.566s 00:10:35.192 sys 0m6.457s 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.192 ************************************ 00:10:35.192 END TEST nvmf_fio_target 00:10:35.192 ************************************ 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- 
# run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.192 ************************************ 00:10:35.192 START TEST nvmf_bdevio 00:10:35.192 ************************************ 00:10:35.192 06:06:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.192 * Looking for test storage... 00:10:35.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.192 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.193 06:06:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.193 06:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.093 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:37.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:37.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:37.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:37.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:10:37.094 00:10:37.094 --- 10.0.0.2 ping statistics --- 00:10:37.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.094 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:10:37.094 00:10:37.094 --- 10.0.0.1 ping statistics --- 00:10:37.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.094 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1664509 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1664509 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1664509 ']' 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.094 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 [2024-07-23 06:06:30.289154] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:10:37.094 [2024-07-23 06:06:30.289238] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.094 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.094 [2024-07-23 06:06:30.327981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:37.094 [2024-07-23 06:06:30.369050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.352 [2024-07-23 06:06:30.472375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.352 [2024-07-23 06:06:30.472436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.352 [2024-07-23 06:06:30.472476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.352 [2024-07-23 06:06:30.472499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.352 [2024-07-23 06:06:30.472518] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.352 [2024-07-23 06:06:30.472654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:37.352 [2024-07-23 06:06:30.472720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:37.352 [2024-07-23 06:06:30.472794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.352 [2024-07-23 06:06:30.472786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.352 [2024-07-23 06:06:30.651639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.352 Malloc0 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.352 06:06:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.352 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.610 [2024-07-23 06:06:30.705037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:37.610 { 00:10:37.610 "params": { 00:10:37.610 "name": "Nvme$subsystem", 00:10:37.610 "trtype": "$TEST_TRANSPORT", 00:10:37.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.610 "adrfam": "ipv4", 00:10:37.610 "trsvcid": "$NVMF_PORT", 00:10:37.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.610 "hdgst": ${hdgst:-false}, 00:10:37.610 "ddgst": ${ddgst:-false} 00:10:37.610 }, 00:10:37.610 "method": "bdev_nvme_attach_controller" 00:10:37.610 } 00:10:37.610 EOF 00:10:37.610 )") 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:37.610 06:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:37.610 "params": { 00:10:37.610 "name": "Nvme1", 00:10:37.610 "trtype": "tcp", 00:10:37.610 "traddr": "10.0.0.2", 00:10:37.610 "adrfam": "ipv4", 00:10:37.610 "trsvcid": "4420", 00:10:37.610 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.610 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.610 "hdgst": false, 00:10:37.610 "ddgst": false 00:10:37.610 }, 00:10:37.610 "method": "bdev_nvme_attach_controller" 00:10:37.610 }' 00:10:37.610 [2024-07-23 06:06:30.753922] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:37.610 [2024-07-23 06:06:30.753987] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664649 ] 00:10:37.610 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.610 [2024-07-23 06:06:30.785824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:37.610 [2024-07-23 06:06:30.815065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.610 [2024-07-23 06:06:30.905881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.610 [2024-07-23 06:06:30.905928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.610 [2024-07-23 06:06:30.905931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.867 I/O targets: 00:10:37.867 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:37.867 00:10:37.867 00:10:37.867 CUnit - A unit testing framework for C - Version 2.1-3 00:10:37.867 http://cunit.sourceforge.net/ 00:10:37.867 00:10:37.867 00:10:37.867 Suite: bdevio tests on: Nvme1n1 00:10:37.867 Test: blockdev write read block ...passed 00:10:38.125 Test: blockdev write zeroes read block ...passed 00:10:38.125 Test: blockdev write zeroes read no split ...passed 00:10:38.125 Test: blockdev write zeroes read split ...passed 00:10:38.125 Test: blockdev write zeroes read split partial ...passed 00:10:38.125 Test: blockdev reset ...[2024-07-23 06:06:31.343495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:38.125 [2024-07-23 06:06:31.343603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2520940 (9): Bad file descriptor 00:10:38.125 [2024-07-23 06:06:31.397173] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:38.125 passed 00:10:38.125 Test: blockdev write read 8 blocks ...passed 00:10:38.125 Test: blockdev write read size > 128k ...passed 00:10:38.125 Test: blockdev write read invalid size ...passed 00:10:38.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.382 Test: blockdev write read max offset ...passed 00:10:38.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.382 Test: blockdev writev readv 8 blocks ...passed 00:10:38.382 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.382 Test: blockdev writev readv block ...passed 00:10:38.382 Test: blockdev writev readv size > 128k ...passed 00:10:38.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.382 Test: blockdev comparev and writev ...[2024-07-23 06:06:31.614217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.614253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.614278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.614295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.614721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.614747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.614770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.614787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.615164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.615188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.615210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.615227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.615650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.615674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.615696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.382 [2024-07-23 06:06:31.615712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:38.382 passed 00:10:38.382 Test: blockdev nvme passthru rw ...passed 00:10:38.382 Test: blockdev nvme passthru vendor specific ...[2024-07-23 06:06:31.698052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.382 [2024-07-23 06:06:31.698079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.698281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.382 [2024-07-23 06:06:31.698304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.698499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.382 [2024-07-23 06:06:31.698522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:38.382 [2024-07-23 06:06:31.698724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.382 [2024-07-23 06:06:31.698747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:38.382 passed 00:10:38.382 Test: blockdev nvme admin passthru ...passed 00:10:38.640 Test: blockdev copy ...passed 00:10:38.640 00:10:38.640 Run Summary: Type Total Ran Passed Failed Inactive 00:10:38.640 suites 1 1 n/a 0 0 00:10:38.640 tests 23 23 23 0 0 00:10:38.640 asserts 152 152 152 0 n/a 00:10:38.640 00:10:38.640 Elapsed time = 1.256 seconds 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:38.640 06:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:38.640 rmmod nvme_tcp 00:10:38.898 rmmod nvme_fabrics 00:10:38.898 rmmod nvme_keyring 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1664509 ']' 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1664509 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1664509 ']' 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1664509 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1664509 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1664509' 00:10:38.898 killing process with pid 1664509 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1664509 00:10:38.898 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1664509 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.157 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.059 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:41.059 00:10:41.059 real 0m6.352s 00:10:41.059 user 0m10.326s 00:10:41.059 sys 0m2.152s 00:10:41.059 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.059 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.059 ************************************ 00:10:41.059 END TEST nvmf_bdevio 00:10:41.059 ************************************ 00:10:41.059 06:06:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:41.059 06:06:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:41.059 00:10:41.059 real 3m50.889s 00:10:41.059 user 9m59.377s 00:10:41.059 sys 1m7.663s 00:10:41.059 06:06:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.059 06:06:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.059 ************************************ 00:10:41.059 END TEST nvmf_target_core 00:10:41.059 
************************************ 00:10:41.059 06:06:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:41.059 06:06:34 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:41.059 06:06:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:41.059 06:06:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.059 06:06:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:41.318 ************************************ 00:10:41.318 START TEST nvmf_target_extra 00:10:41.318 ************************************ 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:41.318 * Looking for test storage... 00:10:41.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
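The nvmf_example suite that follows sources the NVMe-oF common helpers, starts the example target (build/examples/nvmf -i 0 -g 10000 -m 0xF, wrapped in ip netns exec cvl_0_0_ns_spdk by the CI), configures it over the /var/tmp/spdk.sock RPC socket, and drives it with spdk_nvme_perf. Condensed into a hand-runnable sketch from an SPDK build tree, using scripts/rpc.py instead of the rpc_cmd shell wrapper and assuming the example target is already running and serving RPC (the namespace plumbing is omitted):

  # target-side bring-up, mirroring the rpc_cmd calls in the trace below
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                    # 64 MiB RAM-backed bdev, 512-byte blocks; returns Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator-side load generator, same parameters as the run below
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

For this run the perf summary further below reports roughly 14.9k IOPS (58 MiB/s) at about 4.3 ms average latency for the queue-depth-64, 4 KiB random read/write workload.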
00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.318 ************************************ 00:10:41.318 START TEST nvmf_example 00:10:41.318 ************************************ 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:41.318 * Looking for test storage... 00:10:41.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.318 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.319 06:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:41.319 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:43.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:43.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:43.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.849 06:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:43.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.849 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:43.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:10:43.850 00:10:43.850 --- 10.0.0.2 ping statistics --- 00:10:43.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.850 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:43.850 00:10:43.850 --- 10.0.0.1 ping statistics --- 00:10:43.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.850 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1666779 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1666779 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1666779 ']' 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.850 06:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.850 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.850 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.782 06:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:44.782 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:44.782 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.766 Initializing NVMe Controllers 00:10:54.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:54.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:54.766 Initialization complete. Launching workers. 00:10:54.766 ======================================================== 00:10:54.766 Latency(us) 00:10:54.766 Device Information : IOPS MiB/s Average min max 00:10:54.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14910.13 58.24 4292.25 874.91 15670.66 00:10:54.766 ======================================================== 00:10:54.766 Total : 14910.13 58.24 4292.25 874.91 15670.66 00:10:54.766 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.766 rmmod nvme_tcp 00:10:54.766 rmmod nvme_fabrics 00:10:54.766 rmmod nvme_keyring 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1666779 ']' 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1666779 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1666779 ']' 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1666779 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:10:54.766 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.766 06:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1666779 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1666779' 00:10:55.026 killing process with pid 1666779 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 1666779 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 1666779 00:10:55.026 nvmf threads initialize successfully 00:10:55.026 bdev subsystem init successfully 00:10:55.026 created a nvmf target service 00:10:55.026 create targets's poll groups done 00:10:55.026 all subsystems of target started 00:10:55.026 nvmf target is running 00:10:55.026 all subsystems of target stopped 00:10:55.026 destroy targets's poll groups done 00:10:55.026 destroyed the nvmf target service 00:10:55.026 bdev subsystem finish successfully 00:10:55.026 nvmf threads destroy successfully 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.026 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 00:10:57.563 real 0m15.908s 00:10:57.563 user 0m45.010s 00:10:57.563 sys 0m3.296s 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 ************************************ 00:10:57.563 END TEST nvmf_example 00:10:57.563 ************************************ 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:57.563 06:06:50 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 ************************************ 00:10:57.563 START TEST nvmf_filesystem 00:10:57.563 ************************************ 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:57.563 * Looking for test storage... 00:10:57.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:57.563 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:57.564 06:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:57.564 06:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:57.564 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:57.564 #define SPDK_CONFIG_H 00:10:57.564 #define SPDK_CONFIG_APPS 1 00:10:57.564 #define SPDK_CONFIG_ARCH native 00:10:57.564 #undef SPDK_CONFIG_ASAN 00:10:57.564 #undef SPDK_CONFIG_AVAHI 00:10:57.564 #undef SPDK_CONFIG_CET 00:10:57.565 #define SPDK_CONFIG_COVERAGE 1 00:10:57.565 #define SPDK_CONFIG_CROSS_PREFIX 00:10:57.565 #undef SPDK_CONFIG_CRYPTO 00:10:57.565 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:57.565 #undef SPDK_CONFIG_CUSTOMOCF 00:10:57.565 #undef SPDK_CONFIG_DAOS 00:10:57.565 #define SPDK_CONFIG_DAOS_DIR 00:10:57.565 #define SPDK_CONFIG_DEBUG 1 00:10:57.565 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:57.565 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:57.565 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:57.565 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:57.565 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:57.565 #undef SPDK_CONFIG_DPDK_UADK 00:10:57.565 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:57.565 #define SPDK_CONFIG_EXAMPLES 1 00:10:57.565 #undef SPDK_CONFIG_FC 00:10:57.565 #define SPDK_CONFIG_FC_PATH 00:10:57.565 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:57.565 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:57.565 #undef SPDK_CONFIG_FUSE 00:10:57.565 #undef SPDK_CONFIG_FUZZER 00:10:57.565 #define SPDK_CONFIG_FUZZER_LIB 00:10:57.565 #undef SPDK_CONFIG_GOLANG 00:10:57.565 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:57.565 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:57.565 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:57.565 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:57.565 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:57.565 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:57.565 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:57.565 #define SPDK_CONFIG_IDXD 1 00:10:57.565 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:57.565 #undef SPDK_CONFIG_IPSEC_MB 00:10:57.565 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:57.565 #define SPDK_CONFIG_ISAL 1 00:10:57.565 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:57.565 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:57.565 #define SPDK_CONFIG_LIBDIR 00:10:57.565 #undef SPDK_CONFIG_LTO 00:10:57.565 #define SPDK_CONFIG_MAX_LCORES 128 00:10:57.565 #define SPDK_CONFIG_NVME_CUSE 1 00:10:57.565 #undef SPDK_CONFIG_OCF 00:10:57.565 #define SPDK_CONFIG_OCF_PATH 00:10:57.565 #define SPDK_CONFIG_OPENSSL_PATH 00:10:57.565 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:57.565 #define SPDK_CONFIG_PGO_DIR 00:10:57.565 #undef SPDK_CONFIG_PGO_USE 00:10:57.565 #define SPDK_CONFIG_PREFIX /usr/local 00:10:57.565 #undef SPDK_CONFIG_RAID5F 00:10:57.565 #undef SPDK_CONFIG_RBD 00:10:57.565 #define SPDK_CONFIG_RDMA 1 00:10:57.565 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:57.565 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:57.565 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:57.565 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:57.565 #define SPDK_CONFIG_SHARED 1 00:10:57.565 #undef SPDK_CONFIG_SMA 00:10:57.565 #define SPDK_CONFIG_TESTS 1 00:10:57.565 #undef SPDK_CONFIG_TSAN 00:10:57.565 #define SPDK_CONFIG_UBLK 1 00:10:57.565 #define SPDK_CONFIG_UBSAN 1 00:10:57.565 #undef 
SPDK_CONFIG_UNIT_TESTS 00:10:57.565 #undef SPDK_CONFIG_URING 00:10:57.565 #define SPDK_CONFIG_URING_PATH 00:10:57.565 #undef SPDK_CONFIG_URING_ZNS 00:10:57.565 #undef SPDK_CONFIG_USDT 00:10:57.565 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:57.565 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:57.565 #define SPDK_CONFIG_VFIO_USER 1 00:10:57.565 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:57.565 #define SPDK_CONFIG_VHOST 1 00:10:57.565 #define SPDK_CONFIG_VIRTIO 1 00:10:57.565 #undef SPDK_CONFIG_VTUNE 00:10:57.565 #define SPDK_CONFIG_VTUNE_DIR 00:10:57.565 #define SPDK_CONFIG_WERROR 1 00:10:57.565 #define SPDK_CONFIG_WPDK_DIR 00:10:57.565 #undef SPDK_CONFIG_XNVME 00:10:57.565 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.565 06:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:57.565 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_BLOBFS 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 
00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:57.566 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.567 06:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1668480 ]] 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1668480 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@334 -- # local source fs size avail mount use 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.FC2v9X 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:10:57.567 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FC2v9X/tests/target /tmp/spdk.FC2v9X 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@362 -- # avails["$mount"]=54018785280 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7975923712 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30935171072 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=62181376 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376535040 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22409216 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996287488 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1069056 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
read -r source fs size use avail _ mount 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:10:57.568 * Looking for test storage... 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=54018785280 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10190516224 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@1689 -- # xtrace_fd 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:57.568 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.569 06:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.569 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:59.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:59.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:59.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:59.472 Found net devices under 0000:0a:00.1: cvl_0_1 
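The device discovery traced above reduces to the following sketch (a simplified reconstruction, not the exact nvmf/common.sh source): each supported NIC's PCI address is mapped to its kernel net-device name through sysfs, and the resulting names feed the TCP interface list.

    # PCI addresses and interface names below are the ones reported in this run
    pci_devs=("0000:0a:00.0" "0000:0a:00.1")    # Intel E810 ports (0x8086:0x159b)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # with two interfaces found, cvl_0_0 becomes the target side and cvl_0_1 the initiator side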
00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:59.472 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.473 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.473 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.473 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:59.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:10:59.731 00:10:59.731 --- 10.0.0.2 ping statistics --- 00:10:59.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.731 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:10:59.731 00:10:59.731 --- 10.0.0.1 ping statistics --- 00:10:59.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.731 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.731 ************************************ 00:10:59.731 START TEST nvmf_filesystem_no_in_capsule 00:10:59.731 ************************************ 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1670104 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1670104 00:10:59.731 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1670104 ']' 00:10:59.732 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.732 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.732 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.732 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.732 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.732 [2024-07-23 06:06:52.980391] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:59.732 [2024-07-23 06:06:52.980486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.732 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.732 [2024-07-23 06:06:53.019848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:59.732 [2024-07-23 06:06:53.046057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.990 [2024-07-23 06:06:53.137558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.990 [2024-07-23 06:06:53.137638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.990 [2024-07-23 06:06:53.137653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.990 [2024-07-23 06:06:53.137679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.990 [2024-07-23 06:06:53.137696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
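The namespace topology and target launch just traced can be condensed into this sketch (paths are shortened, and the polling loop stands in for the harness's waitforlisten helper; the real scripts differ in detail):

    # Target interface lives in its own network namespace; the initiator stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Launch the NVMe-oF target inside the namespace with all trace groups enabled
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Wait until the RPC socket answers before issuing configuration RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done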
00:10:59.990 [2024-07-23 06:06:53.137746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.990 [2024-07-23 06:06:53.137807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.990 [2024-07-23 06:06:53.137858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.990 [2024-07-23 06:06:53.137861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.990 [2024-07-23 06:06:53.291913] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.990 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.249 Malloc1 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.249 06:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.249 [2024-07-23 06:06:53.475655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:00.249 { 00:11:00.249 "name": "Malloc1", 00:11:00.249 "aliases": [ 00:11:00.249 "ffda585b-58c3-4d87-b274-88a9b0a9e4b4" 00:11:00.249 ], 00:11:00.249 "product_name": "Malloc disk", 00:11:00.249 "block_size": 512, 00:11:00.249 "num_blocks": 1048576, 00:11:00.249 "uuid": "ffda585b-58c3-4d87-b274-88a9b0a9e4b4", 00:11:00.249 "assigned_rate_limits": { 00:11:00.249 "rw_ios_per_sec": 0, 00:11:00.249 "rw_mbytes_per_sec": 0, 00:11:00.249 "r_mbytes_per_sec": 0, 00:11:00.249 "w_mbytes_per_sec": 0 00:11:00.249 }, 00:11:00.249 "claimed": true, 00:11:00.249 "claim_type": "exclusive_write", 00:11:00.249 "zoned": false, 00:11:00.249 "supported_io_types": { 00:11:00.249 "read": 
true, 00:11:00.249 "write": true, 00:11:00.249 "unmap": true, 00:11:00.249 "flush": true, 00:11:00.249 "reset": true, 00:11:00.249 "nvme_admin": false, 00:11:00.249 "nvme_io": false, 00:11:00.249 "nvme_io_md": false, 00:11:00.249 "write_zeroes": true, 00:11:00.249 "zcopy": true, 00:11:00.249 "get_zone_info": false, 00:11:00.249 "zone_management": false, 00:11:00.249 "zone_append": false, 00:11:00.249 "compare": false, 00:11:00.249 "compare_and_write": false, 00:11:00.249 "abort": true, 00:11:00.249 "seek_hole": false, 00:11:00.249 "seek_data": false, 00:11:00.249 "copy": true, 00:11:00.249 "nvme_iov_md": false 00:11:00.249 }, 00:11:00.249 "memory_domains": [ 00:11:00.249 { 00:11:00.249 "dma_device_id": "system", 00:11:00.249 "dma_device_type": 1 00:11:00.249 }, 00:11:00.249 { 00:11:00.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.249 "dma_device_type": 2 00:11:00.249 } 00:11:00.249 ], 00:11:00.249 "driver_specific": {} 00:11:00.249 } 00:11:00.249 ]' 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:00.249 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.181 06:06:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:01.181 06:06:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:01.181 06:06:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.181 06:06:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:01.181 06:06:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:03.080 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:03.337 06:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:03.902 06:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:04.835 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:04.835 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:04.835 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:04.835 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.835 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.093 ************************************ 00:11:05.093 START TEST filesystem_ext4 00:11:05.093 ************************************ 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
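Before the three filesystem sub-tests run, the initiator performs the connect-and-partition sequence traced above, and each sub-test then repeats the same format/mount/IO/unmount check. A condensed sketch (commands taken from the trace; the retry logic and error handling of the real filesystem.sh are omitted):

    # Connect to the subsystem exported on 10.0.0.2:4420 and wait for its namespace to appear
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # One GPT partition spanning the 512 MiB malloc namespace; formatted as ext4/btrfs/xfs in turn
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe; sleep 1
    # Per-filesystem check: format, mount, do a little IO, unmount, and confirm the target survived
    mkfs.ext4 -F "/dev/${nvme_name}p1"        # mkfs.btrfs -f / mkfs.xfs -f in the other two runs
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process must still be alive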
00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:05.093 06:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:05.093 mke2fs 1.46.5 (30-Dec-2021) 00:11:05.093 Discarding device blocks: 0/522240 done 00:11:05.093 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:05.093 Filesystem UUID: 9db06f5c-adce-492e-bbbc-bb112841bedc 00:11:05.093 Superblock backups stored on blocks: 00:11:05.093 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:05.093 00:11:05.093 Allocating group tables: 0/64 done 00:11:05.093 Writing inode tables: 0/64 done 00:11:05.351 Creating journal (8192 blocks): done 00:11:06.173 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:06.173 00:11:06.173 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:06.173 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.739 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.739 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:06.739 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.739 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:06.739 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:06.739 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.002 
06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1670104 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.002 00:11:07.002 real 0m1.928s 00:11:07.002 user 0m0.016s 00:11:07.002 sys 0m0.059s 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:07.002 ************************************ 00:11:07.002 END TEST filesystem_ext4 00:11:07.002 ************************************ 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.002 ************************************ 00:11:07.002 START TEST filesystem_btrfs 00:11:07.002 ************************************ 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:07.002 06:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:07.002 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:07.260 btrfs-progs v6.6.2 00:11:07.260 See https://btrfs.readthedocs.io for more information. 00:11:07.260 00:11:07.260 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:07.260 NOTE: several default settings have changed in version 5.15, please make sure 00:11:07.260 this does not affect your deployments: 00:11:07.260 - DUP for metadata (-m dup) 00:11:07.260 - enabled no-holes (-O no-holes) 00:11:07.260 - enabled free-space-tree (-R free-space-tree) 00:11:07.260 00:11:07.260 Label: (null) 00:11:07.260 UUID: 06a4f86b-dd0e-4adf-a239-b3fe7e7c1a6c 00:11:07.260 Node size: 16384 00:11:07.260 Sector size: 4096 00:11:07.260 Filesystem size: 510.00MiB 00:11:07.260 Block group profiles: 00:11:07.260 Data: single 8.00MiB 00:11:07.260 Metadata: DUP 32.00MiB 00:11:07.260 System: DUP 8.00MiB 00:11:07.260 SSD detected: yes 00:11:07.260 Zoned device: no 00:11:07.260 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:07.260 Runtime features: free-space-tree 00:11:07.260 Checksum: crc32c 00:11:07.260 Number of devices: 1 00:11:07.260 Devices: 00:11:07.260 ID SIZE PATH 00:11:07.260 1 510.00MiB /dev/nvme0n1p1 00:11:07.260 00:11:07.260 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:07.260 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1670104 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.825 00:11:07.825 real 0m0.966s 00:11:07.825 user 0m0.013s 00:11:07.825 sys 0m0.119s 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.825 ************************************ 00:11:07.825 END TEST filesystem_btrfs 00:11:07.825 ************************************ 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.825 ************************************ 00:11:07.825 START TEST filesystem_xfs 00:11:07.825 ************************************ 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:07.825 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:07.825 06:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:08.083 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:08.083 = sectsz=512 attr=2, projid32bit=1 00:11:08.084 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:08.084 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:08.084 data = bsize=4096 blocks=130560, imaxpct=25 00:11:08.084 = sunit=0 swidth=0 blks 00:11:08.084 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:08.084 log =internal log bsize=4096 blocks=16384, version=2 00:11:08.084 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:08.084 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:09.016 Discarding blocks...Done. 00:11:09.016 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:09.016 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1670104 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.541 00:11:11.541 real 0m3.375s 00:11:11.541 user 0m0.027s 00:11:11.541 sys 0m0.054s 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:11.541 ************************************ 00:11:11.541 END TEST filesystem_xfs 00:11:11.541 ************************************ 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:11.541 06:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1670104 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1670104 ']' 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1670104 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1670104 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:11.541 06:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1670104' 00:11:11.541 killing process with pid 1670104 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1670104 00:11:11.541 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1670104 00:11:12.107 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:12.108 00:11:12.108 real 0m12.275s 00:11:12.108 user 0m47.009s 00:11:12.108 sys 0m1.907s 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.108 ************************************ 00:11:12.108 END TEST nvmf_filesystem_no_in_capsule 00:11:12.108 ************************************ 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.108 ************************************ 00:11:12.108 START TEST nvmf_filesystem_in_capsule 00:11:12.108 ************************************ 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1671778 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1671778 00:11:12.108 06:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1671778 ']' 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.108 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.108 [2024-07-23 06:07:05.312671] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:11:12.108 [2024-07-23 06:07:05.312754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.108 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.108 [2024-07-23 06:07:05.353225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:12.108 [2024-07-23 06:07:05.382671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.366 [2024-07-23 06:07:05.474997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.366 [2024-07-23 06:07:05.475062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.366 [2024-07-23 06:07:05.475079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.366 [2024-07-23 06:07:05.475092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.366 [2024-07-23 06:07:05.475104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
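The in-capsule variant below replays the same target-side RPC sequence as the first run; the only functional difference is the transport's in-capsule data size (-c 4096 instead of -c 0), which lets small write payloads travel inside the command capsule. A sketch of that sequence, with the harness's rpc_cmd wrapper expanded to a direct scripts/rpc.py invocation (a simplification, not the wrapper itself):

    rpc_py="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc_py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c 0 in the no_in_capsule run
    $rpc_py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420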
00:11:12.366 [2024-07-23 06:07:05.475204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.366 [2024-07-23 06:07:05.475276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.366 [2024-07-23 06:07:05.475368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.366 [2024-07-23 06:07:05.475371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 [2024-07-23 06:07:05.631174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.366 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.625 Malloc1 00:11:12.625 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.625 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.625 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.625 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.625 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.625 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.625 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.626 [2024-07-23 06:07:05.818628] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:12.626 { 00:11:12.626 "name": "Malloc1", 00:11:12.626 "aliases": [ 00:11:12.626 "3b15fbe9-9fdf-4242-b789-73e953db4a9f" 00:11:12.626 ], 00:11:12.626 "product_name": "Malloc disk", 00:11:12.626 "block_size": 512, 00:11:12.626 "num_blocks": 1048576, 00:11:12.626 "uuid": "3b15fbe9-9fdf-4242-b789-73e953db4a9f", 00:11:12.626 "assigned_rate_limits": { 00:11:12.626 "rw_ios_per_sec": 0, 00:11:12.626 "rw_mbytes_per_sec": 0, 00:11:12.626 "r_mbytes_per_sec": 0, 00:11:12.626 "w_mbytes_per_sec": 0 00:11:12.626 }, 00:11:12.626 "claimed": true, 00:11:12.626 "claim_type": "exclusive_write", 00:11:12.626 "zoned": false, 00:11:12.626 "supported_io_types": { 00:11:12.626 "read": true, 00:11:12.626 "write": true, 00:11:12.626 "unmap": true, 00:11:12.626 "flush": true, 00:11:12.626 "reset": true, 00:11:12.626 "nvme_admin": false, 
00:11:12.626 "nvme_io": false, 00:11:12.626 "nvme_io_md": false, 00:11:12.626 "write_zeroes": true, 00:11:12.626 "zcopy": true, 00:11:12.626 "get_zone_info": false, 00:11:12.626 "zone_management": false, 00:11:12.626 "zone_append": false, 00:11:12.626 "compare": false, 00:11:12.626 "compare_and_write": false, 00:11:12.626 "abort": true, 00:11:12.626 "seek_hole": false, 00:11:12.626 "seek_data": false, 00:11:12.626 "copy": true, 00:11:12.626 "nvme_iov_md": false 00:11:12.626 }, 00:11:12.626 "memory_domains": [ 00:11:12.626 { 00:11:12.626 "dma_device_id": "system", 00:11:12.626 "dma_device_type": 1 00:11:12.626 }, 00:11:12.626 { 00:11:12.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.626 "dma_device_type": 2 00:11:12.626 } 00:11:12.626 ], 00:11:12.626 "driver_specific": {} 00:11:12.626 } 00:11:12.626 ]' 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:12.626 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.192 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.192 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.192 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.192 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:13.192 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:15.718 06:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:15.718 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:16.661 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:16.661 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:16.661 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:16.661 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.661 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.919 ************************************ 00:11:16.919 START TEST filesystem_in_capsule_ext4 00:11:16.919 ************************************ 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:16.919 06:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:16.919 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:16.919 mke2fs 1.46.5 (30-Dec-2021) 00:11:16.919 Discarding device blocks: 0/522240 done 00:11:16.919 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:16.919 Filesystem UUID: b0620473-dd05-48d4-9082-58add1c71cac 00:11:16.919 Superblock backups stored on blocks: 00:11:16.919 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:16.919 00:11:16.919 Allocating group tables: 0/64 done 00:11:16.919 Writing inode tables: 0/64 done 00:11:17.177 Creating journal (8192 blocks): done 00:11:17.177 Writing superblocks and filesystem accounting information: 0/64 done 00:11:17.177 00:11:17.177 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:17.177 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.114 06:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1671778 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.114 00:11:18.114 real 0m1.384s 00:11:18.114 user 0m0.012s 00:11:18.114 sys 0m0.058s 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:18.114 ************************************ 00:11:18.114 END TEST filesystem_in_capsule_ext4 00:11:18.114 ************************************ 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.114 ************************************ 00:11:18.114 START TEST filesystem_in_capsule_btrfs 00:11:18.114 ************************************ 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:18.114 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:18.115 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:18.373 btrfs-progs v6.6.2 00:11:18.373 See https://btrfs.readthedocs.io for more information. 00:11:18.373 00:11:18.373 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:18.373 NOTE: several default settings have changed in version 5.15, please make sure 00:11:18.373 this does not affect your deployments: 00:11:18.373 - DUP for metadata (-m dup) 00:11:18.373 - enabled no-holes (-O no-holes) 00:11:18.373 - enabled free-space-tree (-R free-space-tree) 00:11:18.373 00:11:18.373 Label: (null) 00:11:18.373 UUID: a19fddc3-1b1a-46b9-85c8-121c6e9730e5 00:11:18.373 Node size: 16384 00:11:18.373 Sector size: 4096 00:11:18.373 Filesystem size: 510.00MiB 00:11:18.373 Block group profiles: 00:11:18.373 Data: single 8.00MiB 00:11:18.373 Metadata: DUP 32.00MiB 00:11:18.373 System: DUP 8.00MiB 00:11:18.373 SSD detected: yes 00:11:18.373 Zoned device: no 00:11:18.373 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:18.373 Runtime features: free-space-tree 00:11:18.373 Checksum: crc32c 00:11:18.373 Number of devices: 1 00:11:18.373 Devices: 00:11:18.373 ID SIZE PATH 00:11:18.373 1 510.00MiB /dev/nvme0n1p1 00:11:18.373 00:11:18.373 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:18.373 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1671778 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.632 00:11:18.632 real 0m0.447s 00:11:18.632 user 0m0.019s 00:11:18.632 sys 0m0.102s 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:18.632 ************************************ 00:11:18.632 END TEST filesystem_in_capsule_btrfs 00:11:18.632 ************************************ 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.632 ************************************ 00:11:18.632 START TEST filesystem_in_capsule_xfs 00:11:18.632 ************************************ 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:18.632 06:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:18.632 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:18.891 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:18.891 = sectsz=512 attr=2, projid32bit=1 00:11:18.891 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:18.891 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:18.891 data = bsize=4096 blocks=130560, imaxpct=25 00:11:18.891 = sunit=0 swidth=0 blks 00:11:18.891 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:18.891 log =internal log bsize=4096 blocks=16384, version=2 00:11:18.891 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:18.891 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:19.827 Discarding blocks...Done. 00:11:19.827 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:19.827 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.379 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.379 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:22.379 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.379 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:22.379 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:22.379 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.379 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1671778 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.380 00:11:22.380 real 0m3.334s 00:11:22.380 user 0m0.020s 00:11:22.380 sys 0m0.060s 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.380 
06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:22.380 ************************************ 00:11:22.380 END TEST filesystem_in_capsule_xfs 00:11:22.380 ************************************ 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1671778 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1671778 ']' 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1671778 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
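Condensed, the in-capsule pass that just completed drives the target and the local NVMe/TCP initiator through the cycle below. This is a minimal sketch built only from the commands recorded above (rpc_cmd is the harness's wrapper around the target's RPC socket); the ext4 branch is shown, and the btrfs/xfs subtests differ only in the mkfs step:

# Target side: TCP transport with 4096-byte in-capsule data, 512 MiB malloc bdev, subsystem, namespace, listener
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, partition, then one mkfs/mount/IO/umount cycle per filesystem
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
mkdir -p /mnt/device
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
mkfs.ext4 -F /dev/nvme0n1p1          # btrfs subtest: mkfs.btrfs -f; xfs subtest: mkfs.xfs -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device

# Teardown: drop the partition, disconnect the initiator, remove the subsystem
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1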
00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1671778 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1671778' 00:11:22.380 killing process with pid 1671778 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1671778 00:11:22.380 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1671778 00:11:22.949 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:22.949 00:11:22.949 real 0m10.870s 00:11:22.949 user 0m41.504s 00:11:22.949 sys 0m1.726s 00:11:22.949 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.950 ************************************ 00:11:22.950 END TEST nvmf_filesystem_in_capsule 00:11:22.950 ************************************ 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.950 rmmod nvme_tcp 00:11:22.950 rmmod nvme_fabrics 00:11:22.950 rmmod nvme_keyring 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.950 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:25.489 00:11:25.489 real 0m27.806s 00:11:25.489 user 1m29.419s 00:11:25.489 sys 0m5.362s 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.489 ************************************ 00:11:25.489 END TEST nvmf_filesystem 00:11:25.489 ************************************ 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.489 ************************************ 00:11:25.489 START TEST nvmf_target_discovery 00:11:25.489 ************************************ 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:25.489 * Looking for test storage... 
00:11:25.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.489 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.490 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:27.398 06:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:27.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:27.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:27.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:27.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:27.398 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:11:27.399 00:11:27.399 --- 10.0.0.2 ping statistics --- 00:11:27.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.399 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:11:27.399 00:11:27.399 --- 10.0.0.1 ping statistics --- 00:11:27.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.399 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1675117 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1675117 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1675117 ']' 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:27.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.399 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.399 [2024-07-23 06:07:20.503147] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:11:27.399 [2024-07-23 06:07:20.503232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.399 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.399 [2024-07-23 06:07:20.546978] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:27.399 [2024-07-23 06:07:20.577429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.399 [2024-07-23 06:07:20.671218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.399 [2024-07-23 06:07:20.671277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.399 [2024-07-23 06:07:20.671293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.399 [2024-07-23 06:07:20.671306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.399 [2024-07-23 06:07:20.671317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.399 [2024-07-23 06:07:20.671407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.399 [2024-07-23 06:07:20.671464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.399 [2024-07-23 06:07:20.671528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.399 [2024-07-23 06:07:20.671531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 [2024-07-23 06:07:21.461363] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 Null1 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 [2024-07-23 06:07:21.501672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 Null2 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 Null3 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.336 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.337 Null4 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.337 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:28.598 00:11:28.598 Discovery Log Number of Records 6, Generation counter 6 00:11:28.598 =====Discovery Log Entry 0====== 00:11:28.598 trtype: tcp 00:11:28.598 adrfam: ipv4 00:11:28.598 subtype: current discovery subsystem 00:11:28.598 treq: not required 00:11:28.598 portid: 0 00:11:28.598 trsvcid: 4420 00:11:28.598 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.598 traddr: 10.0.0.2 00:11:28.598 eflags: explicit discovery connections, duplicate discovery information 00:11:28.598 sectype: none 00:11:28.598 =====Discovery Log Entry 1====== 00:11:28.598 trtype: tcp 00:11:28.598 adrfam: ipv4 00:11:28.598 subtype: nvme subsystem 00:11:28.598 treq: not required 00:11:28.598 portid: 0 00:11:28.598 trsvcid: 4420 00:11:28.598 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:28.598 traddr: 10.0.0.2 00:11:28.598 eflags: none 00:11:28.598 sectype: none 00:11:28.598 =====Discovery Log Entry 2====== 00:11:28.598 trtype: tcp 00:11:28.598 adrfam: ipv4 00:11:28.598 subtype: nvme subsystem 00:11:28.598 treq: not required 00:11:28.598 portid: 0 00:11:28.598 trsvcid: 4420 00:11:28.598 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:28.598 traddr: 10.0.0.2 00:11:28.598 eflags: none 00:11:28.598 sectype: none 00:11:28.598 =====Discovery Log Entry 3====== 00:11:28.598 trtype: tcp 00:11:28.598 adrfam: ipv4 00:11:28.598 subtype: nvme subsystem 00:11:28.598 treq: not required 00:11:28.598 portid: 0 00:11:28.598 trsvcid: 4420 00:11:28.598 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:28.598 traddr: 10.0.0.2 00:11:28.598 eflags: none 00:11:28.598 sectype: none 00:11:28.598 =====Discovery Log Entry 4====== 00:11:28.598 trtype: tcp 00:11:28.598 adrfam: ipv4 00:11:28.598 subtype: nvme subsystem 00:11:28.598 treq: not required 00:11:28.598 portid: 0 00:11:28.598 trsvcid: 4420 00:11:28.598 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:28.598 traddr: 10.0.0.2 00:11:28.598 eflags: none 00:11:28.598 sectype: none 00:11:28.598 =====Discovery Log Entry 5====== 00:11:28.598 trtype: tcp 00:11:28.598 adrfam: ipv4 00:11:28.598 subtype: discovery subsystem referral 00:11:28.598 treq: not required 00:11:28.598 portid: 0 00:11:28.598 trsvcid: 4430 00:11:28.598 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.598 traddr: 10.0.0.2 00:11:28.598 eflags: none 00:11:28.598 sectype: none 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:28.598 Perform nvmf subsystem discovery via RPC 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.598 [ 00:11:28.598 { 00:11:28.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:28.598 "subtype": "Discovery", 00:11:28.598 "listen_addresses": [ 00:11:28.598 { 00:11:28.598 "trtype": "TCP", 00:11:28.598 "adrfam": "IPv4", 00:11:28.598 "traddr": "10.0.0.2", 00:11:28.598 "trsvcid": "4420" 00:11:28.598 } 00:11:28.598 ], 00:11:28.598 "allow_any_host": true, 00:11:28.598 "hosts": [] 00:11:28.598 }, 00:11:28.598 { 00:11:28.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.598 "subtype": "NVMe", 00:11:28.598 "listen_addresses": [ 00:11:28.598 { 00:11:28.598 "trtype": "TCP", 00:11:28.598 "adrfam": "IPv4", 00:11:28.598 
"traddr": "10.0.0.2", 00:11:28.598 "trsvcid": "4420" 00:11:28.598 } 00:11:28.598 ], 00:11:28.598 "allow_any_host": true, 00:11:28.598 "hosts": [], 00:11:28.598 "serial_number": "SPDK00000000000001", 00:11:28.598 "model_number": "SPDK bdev Controller", 00:11:28.598 "max_namespaces": 32, 00:11:28.598 "min_cntlid": 1, 00:11:28.598 "max_cntlid": 65519, 00:11:28.598 "namespaces": [ 00:11:28.598 { 00:11:28.598 "nsid": 1, 00:11:28.598 "bdev_name": "Null1", 00:11:28.598 "name": "Null1", 00:11:28.598 "nguid": "F0D8C0B0A3004F568FB7EA362E8C0F22", 00:11:28.598 "uuid": "f0d8c0b0-a300-4f56-8fb7-ea362e8c0f22" 00:11:28.598 } 00:11:28.598 ] 00:11:28.598 }, 00:11:28.598 { 00:11:28.598 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:28.598 "subtype": "NVMe", 00:11:28.598 "listen_addresses": [ 00:11:28.598 { 00:11:28.598 "trtype": "TCP", 00:11:28.598 "adrfam": "IPv4", 00:11:28.598 "traddr": "10.0.0.2", 00:11:28.598 "trsvcid": "4420" 00:11:28.598 } 00:11:28.598 ], 00:11:28.598 "allow_any_host": true, 00:11:28.598 "hosts": [], 00:11:28.598 "serial_number": "SPDK00000000000002", 00:11:28.598 "model_number": "SPDK bdev Controller", 00:11:28.598 "max_namespaces": 32, 00:11:28.598 "min_cntlid": 1, 00:11:28.598 "max_cntlid": 65519, 00:11:28.598 "namespaces": [ 00:11:28.598 { 00:11:28.598 "nsid": 1, 00:11:28.598 "bdev_name": "Null2", 00:11:28.598 "name": "Null2", 00:11:28.598 "nguid": "A954FEC7E04F45A798E82A539CFE4C12", 00:11:28.598 "uuid": "a954fec7-e04f-45a7-98e8-2a539cfe4c12" 00:11:28.598 } 00:11:28.598 ] 00:11:28.598 }, 00:11:28.598 { 00:11:28.598 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:28.598 "subtype": "NVMe", 00:11:28.598 "listen_addresses": [ 00:11:28.598 { 00:11:28.598 "trtype": "TCP", 00:11:28.598 "adrfam": "IPv4", 00:11:28.598 "traddr": "10.0.0.2", 00:11:28.598 "trsvcid": "4420" 00:11:28.598 } 00:11:28.598 ], 00:11:28.598 "allow_any_host": true, 00:11:28.598 "hosts": [], 00:11:28.598 "serial_number": "SPDK00000000000003", 00:11:28.598 "model_number": "SPDK bdev Controller", 00:11:28.598 "max_namespaces": 32, 00:11:28.598 "min_cntlid": 1, 00:11:28.598 "max_cntlid": 65519, 00:11:28.598 "namespaces": [ 00:11:28.598 { 00:11:28.598 "nsid": 1, 00:11:28.598 "bdev_name": "Null3", 00:11:28.598 "name": "Null3", 00:11:28.598 "nguid": "04446803BAB64D60851B5192835A399F", 00:11:28.598 "uuid": "04446803-bab6-4d60-851b-5192835a399f" 00:11:28.598 } 00:11:28.598 ] 00:11:28.598 }, 00:11:28.598 { 00:11:28.598 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:28.598 "subtype": "NVMe", 00:11:28.598 "listen_addresses": [ 00:11:28.598 { 00:11:28.598 "trtype": "TCP", 00:11:28.598 "adrfam": "IPv4", 00:11:28.598 "traddr": "10.0.0.2", 00:11:28.598 "trsvcid": "4420" 00:11:28.598 } 00:11:28.598 ], 00:11:28.598 "allow_any_host": true, 00:11:28.598 "hosts": [], 00:11:28.598 "serial_number": "SPDK00000000000004", 00:11:28.598 "model_number": "SPDK bdev Controller", 00:11:28.598 "max_namespaces": 32, 00:11:28.598 "min_cntlid": 1, 00:11:28.598 "max_cntlid": 65519, 00:11:28.598 "namespaces": [ 00:11:28.598 { 00:11:28.598 "nsid": 1, 00:11:28.598 "bdev_name": "Null4", 00:11:28.598 "name": "Null4", 00:11:28.598 "nguid": "42758867A73449698C421691C575D5A9", 00:11:28.598 "uuid": "42758867-a734-4969-8c42-1691c575d5a9" 00:11:28.598 } 00:11:28.598 ] 00:11:28.598 } 00:11:28.598 ] 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:28.598 06:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.598 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.599 06:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.599 rmmod nvme_tcp 00:11:28.599 rmmod nvme_fabrics 00:11:28.599 rmmod nvme_keyring 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.599 06:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1675117 ']' 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1675117 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1675117 ']' 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1675117 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1675117 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1675117' 00:11:28.599 killing process with pid 1675117 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1675117 00:11:28.599 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1675117 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.858 06:07:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.405 00:11:31.405 real 0m5.873s 00:11:31.405 user 0m6.810s 00:11:31.405 sys 0m1.778s 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.405 ************************************ 00:11:31.405 END TEST nvmf_target_discovery 00:11:31.405 ************************************ 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.405 ************************************ 00:11:31.405 START TEST nvmf_referrals 00:11:31.405 ************************************ 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.405 * Looking for test storage... 00:11:31.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.405 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.310 06:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:33.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:33.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:33.310 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:33.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.310 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:33.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:11:33.310 00:11:33.310 --- 10.0.0.2 ping statistics --- 00:11:33.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.311 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:33.311 00:11:33.311 --- 10.0.0.1 ping statistics --- 00:11:33.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.311 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1677291 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1677291 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1677291 ']' 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
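At this point common.sh has finished nvmf_tcp_init: one port of the E810 pair (cvl_0_0) was moved into the private namespace cvl_0_0_ns_spdk as the target side, its sibling cvl_0_1 stayed in the root namespace as the initiator side, and both directions were verified with a single ping. A minimal standalone sketch of that setup, assuming the same interface names and the 10.0.0.0/24 addressing this run chose:

    # Sketch only - mirrors the commands traced above; interface names and
    # addresses are this run's defaults, not fixed values.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on the root-namespace side
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The nvmf_tgt process is then launched through that same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so every RPC in the referrals test below talks to a target that only sees cvl_0_0.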
00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.311 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.311 [2024-07-23 06:07:26.502662] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:11:33.311 [2024-07-23 06:07:26.502757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.311 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.311 [2024-07-23 06:07:26.541250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:33.311 [2024-07-23 06:07:26.568593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.569 [2024-07-23 06:07:26.653084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.569 [2024-07-23 06:07:26.653150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.569 [2024-07-23 06:07:26.653173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.569 [2024-07-23 06:07:26.653184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.569 [2024-07-23 06:07:26.653194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.569 [2024-07-23 06:07:26.653275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.569 [2024-07-23 06:07:26.653339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.569 [2024-07-23 06:07:26.653365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.569 [2024-07-23 06:07:26.653367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.569 [2024-07-23 06:07:26.806160] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
10.0.0.2 -s 8009 discovery 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.569 [2024-07-23 06:07:26.818381] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:33.569 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:33.570 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.570 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:33.570 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.570 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.570 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:33.570 06:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.829 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:33.829 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.829 06:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:34.088 06:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.088 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:34.348 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:34.348 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:34.348 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:34.348 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:34.348 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.348 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:34.608 06:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:34.608 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.609 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:34.868 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:34.868 06:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:34.868 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:34.868 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:34.868 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.868 06:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.868 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@86 -- # nvmftestfini 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:35.127 rmmod nvme_tcp 00:11:35.127 rmmod nvme_fabrics 00:11:35.127 rmmod nvme_keyring 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1677291 ']' 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1677291 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1677291 ']' 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1677291 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1677291 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1677291' 00:11:35.127 killing process with pid 1677291 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1677291 00:11:35.127 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1677291 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.385 06:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.926 06:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.926 00:11:37.926 real 0m6.430s 00:11:37.926 user 0m9.235s 00:11:37.926 sys 0m2.106s 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.926 ************************************ 00:11:37.926 END TEST nvmf_referrals 00:11:37.926 ************************************ 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.926 ************************************ 00:11:37.926 START TEST nvmf_connect_disconnect 00:11:37.926 ************************************ 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:37.926 * Looking for test storage... 00:11:37.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.926 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.927 06:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.927 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.832 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:39.833 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:39.833 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.833 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:39.833 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:39.833 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:39.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:39.833 00:11:39.833 --- 10.0.0.2 ping statistics --- 00:11:39.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.833 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:11:39.833 00:11:39.833 --- 10.0.0.1 ping statistics --- 00:11:39.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.833 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:39.833 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1679500 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1679500 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1679500 ']' 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.834 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:39.834 [2024-07-23 06:07:32.941768] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
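The connect_disconnect run repeats the same device discovery and namespace setup as above, then starts a fresh target (pid 1679500) inside the namespace and waits for its RPC socket. Roughly, that start-up step looks like the sketch below; the polling loop is an illustrative stand-in, since the real waitforlisten helper from autotest_common.sh is not shown in this log:

    # Target command line copied from the trace above; the wait loop is an assumption.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # wait for the RPC listener to appear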
00:11:39.834 [2024-07-23 06:07:32.941850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.834 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.834 [2024-07-23 06:07:32.981141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:39.834 [2024-07-23 06:07:33.013850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.834 [2024-07-23 06:07:33.110243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.834 [2024-07-23 06:07:33.110303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.834 [2024-07-23 06:07:33.110329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.834 [2024-07-23 06:07:33.110342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.834 [2024-07-23 06:07:33.110354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.834 [2024-07-23 06:07:33.110444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.834 [2024-07-23 06:07:33.110508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.834 [2024-07-23 06:07:33.110560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.834 [2024-07-23 06:07:33.110562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.100 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.100 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:11:40.100 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:40.100 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.100 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:40.100 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.100 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:40.101 [2024-07-23 06:07:33.266118] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 
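The rpc_cmd calls traced here and continued below provision the target that connect_disconnect.sh then exercises: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem with that bdev as its namespace, and a listener on 10.0.0.2:4420, followed by 100 connect/disconnect iterations from the host namespace. A hedged, out-of-test sketch of the same sequence against the default /var/tmp/spdk.sock, assuming the stock scripts/rpc.py (abbreviated to rpc.py) and nvme-cli, with the flags exactly as passed in this run:

    # provision the target over the RPC socket
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                      # returns the bdev name, Malloc0 here
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # one of the 100 connect/disconnect iterations driven from the initiator side
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # prints "... disconnected 1 controller(s)"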
00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.101 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:40.102 [2024-07-23 06:07:33.323283] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:40.102 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:42.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.468 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:05.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.705 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:30.705 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:30.705 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.705 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:30.965 06:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.965 rmmod nvme_tcp 00:15:30.965 rmmod nvme_fabrics 00:15:30.965 rmmod nvme_keyring 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1679500 ']' 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1679500 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1679500 ']' 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1679500 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1679500 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1679500' 00:15:30.965 killing process with pid 1679500 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1679500 00:15:30.965 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1679500 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.226 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.169 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:33.169 00:15:33.169 real 3m55.683s 00:15:33.169 user 14m56.954s 00:15:33.169 sys 0m35.193s 00:15:33.169 06:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.169 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:33.169 ************************************ 00:15:33.169 END TEST nvmf_connect_disconnect 00:15:33.169 ************************************ 00:15:33.169 06:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:33.169 06:11:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:33.169 06:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:33.169 06:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.169 06:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.169 ************************************ 00:15:33.169 START TEST nvmf_multitarget 00:15:33.170 ************************************ 00:15:33.170 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:33.170 * Looking for test storage... 00:15:33.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.170 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.429 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.430 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 
-- # pci_devs=() 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:35.335 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:35.335 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.335 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:35.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:35.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.336 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.595 06:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:15:35.595 00:15:35.595 --- 10.0.0.2 ping statistics --- 00:15:35.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.595 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:15:35.595 00:15:35.595 --- 10.0.0.1 ping statistics --- 00:15:35.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.595 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1710479 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.595 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1710479 00:15:35.596 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1710479 ']' 00:15:35.596 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:15:35.596 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.596 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.596 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.596 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:35.596 [2024-07-23 06:11:28.837860] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:35.596 [2024-07-23 06:11:28.837947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.596 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.596 [2024-07-23 06:11:28.876911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:35.596 [2024-07-23 06:11:28.906179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.854 [2024-07-23 06:11:28.996654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.854 [2024-07-23 06:11:28.996733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.854 [2024-07-23 06:11:28.996747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.854 [2024-07-23 06:11:28.996759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.854 [2024-07-23 06:11:28.996769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
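The multitarget test that starts here drives SPDK's per-process target objects through test/nvmf/target/multitarget_rpc.py: it checks that exactly one (default) target exists, creates nvmf_tgt_1 and nvmf_tgt_2, confirms the count went to 3, deletes both, and confirms the count fell back to 1 — the jq length and '!=' comparisons traced below. A condensed sketch of that flow, with the long script path shortened to multitarget_rpc.py for readability, would be:

    multitarget_rpc.py nvmf_get_targets | jq length       # expect 1: the default target
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length       # expect 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length       # expect 1 again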
00:15:35.854 [2024-07-23 06:11:28.996863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.854 [2024-07-23 06:11:28.996933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.854 [2024-07-23 06:11:28.997064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.854 [2024-07-23 06:11:28.997066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:35.854 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:36.113 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:36.113 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:36.113 "nvmf_tgt_1" 00:15:36.113 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:36.371 "nvmf_tgt_2" 00:15:36.371 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:36.371 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:36.371 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:36.371 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:36.630 true 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:36.630 true 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:36.630 06:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.630 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.630 rmmod nvme_tcp 00:15:36.888 rmmod nvme_fabrics 00:15:36.888 rmmod nvme_keyring 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1710479 ']' 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1710479 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1710479 ']' 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1710479 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1710479 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1710479' 00:15:36.888 killing process with pid 1710479 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1710479 00:15:36.888 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1710479 00:15:37.148 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:37.148 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:37.148 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:37.148 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.148 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:37.148 06:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.148 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.149 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:39.057 00:15:39.057 real 0m5.833s 00:15:39.057 user 0m6.464s 00:15:39.057 sys 0m2.020s 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.057 ************************************ 00:15:39.057 END TEST nvmf_multitarget 00:15:39.057 ************************************ 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.057 ************************************ 00:15:39.057 START TEST nvmf_rpc 00:15:39.057 ************************************ 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:39.057 * Looking for test storage... 
00:15:39.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.057 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.315 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:39.316 06:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:39.316 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:41.218 06:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:41.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:41.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.218 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:41.218 
06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:41.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:41.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:41.219 06:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:41.219 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:41.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:15:41.480 00:15:41.480 --- 10.0.0.2 ping statistics --- 00:15:41.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.480 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:41.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:15:41.480 00:15:41.480 --- 10.0.0.1 ping statistics --- 00:15:41.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.480 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1712687 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:41.480 06:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1712687 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1712687 ']' 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.480 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.480 [2024-07-23 06:11:34.691580] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:41.480 [2024-07-23 06:11:34.691711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.480 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.480 [2024-07-23 06:11:34.735570] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:41.480 [2024-07-23 06:11:34.763399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.739 [2024-07-23 06:11:34.856068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.739 [2024-07-23 06:11:34.856123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.739 [2024-07-23 06:11:34.856137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.739 [2024-07-23 06:11:34.856149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.739 [2024-07-23 06:11:34.856158] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:41.739 [2024-07-23 06:11:34.856206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.739 [2024-07-23 06:11:34.856266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.739 [2024-07-23 06:11:34.856333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.739 [2024-07-23 06:11:34.856335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.739 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.739 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:41.739 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.739 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.739 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:41.739 "tick_rate": 2700000000, 00:15:41.739 "poll_groups": [ 00:15:41.739 { 00:15:41.739 "name": "nvmf_tgt_poll_group_000", 00:15:41.739 "admin_qpairs": 0, 00:15:41.739 "io_qpairs": 0, 00:15:41.739 "current_admin_qpairs": 0, 00:15:41.739 "current_io_qpairs": 0, 00:15:41.739 "pending_bdev_io": 0, 00:15:41.739 "completed_nvme_io": 0, 00:15:41.739 "transports": [] 00:15:41.739 }, 00:15:41.739 { 00:15:41.739 "name": "nvmf_tgt_poll_group_001", 00:15:41.739 "admin_qpairs": 0, 00:15:41.739 "io_qpairs": 0, 00:15:41.739 "current_admin_qpairs": 0, 00:15:41.739 "current_io_qpairs": 0, 00:15:41.739 "pending_bdev_io": 0, 00:15:41.739 "completed_nvme_io": 0, 00:15:41.739 "transports": [] 00:15:41.739 }, 00:15:41.739 { 00:15:41.739 "name": "nvmf_tgt_poll_group_002", 00:15:41.739 "admin_qpairs": 0, 00:15:41.739 "io_qpairs": 0, 00:15:41.739 "current_admin_qpairs": 0, 00:15:41.739 "current_io_qpairs": 0, 00:15:41.739 "pending_bdev_io": 0, 00:15:41.739 "completed_nvme_io": 0, 00:15:41.739 "transports": [] 00:15:41.739 }, 00:15:41.739 { 00:15:41.739 "name": "nvmf_tgt_poll_group_003", 00:15:41.739 "admin_qpairs": 0, 00:15:41.739 "io_qpairs": 0, 00:15:41.739 "current_admin_qpairs": 0, 00:15:41.739 "current_io_qpairs": 0, 00:15:41.739 "pending_bdev_io": 0, 00:15:41.739 "completed_nvme_io": 0, 00:15:41.739 "transports": [] 00:15:41.739 } 00:15:41.739 ] 00:15:41.739 }' 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
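At this point in the trace the target is running inside the cvl_0_0_ns_spdk namespace and rpc.sh inspects nvmf_get_stats before any transport exists: the (( 4 == 4 )) check confirms one poll group per core enabled by the 0xF mask, and each group's "transports" array is still empty. A hand-run equivalent of that check, on the assumption that rpc_cmd forwards its arguments to scripts/rpc.py over the default /var/tmp/spdk.sock socket, would look roughly like:

    ./scripts/rpc.py nvmf_get_stats > stats.json        # same RPC the trace issues via rpc_cmd
    jq '.poll_groups[].name' stats.json | wc -l         # expect 4: one poll group per core in -m 0xF
    jq '.poll_groups[0].transports[0]' stats.json       # expect null until nvmf_create_transport runs

The trace that follows creates the TCP transport and re-reads the stats, after which every poll group reports a "TCP" transport entry.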
00:15:41.739 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.998 [2024-07-23 06:11:35.105472] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:41.998 "tick_rate": 2700000000, 00:15:41.998 "poll_groups": [ 00:15:41.998 { 00:15:41.998 "name": "nvmf_tgt_poll_group_000", 00:15:41.998 "admin_qpairs": 0, 00:15:41.998 "io_qpairs": 0, 00:15:41.998 "current_admin_qpairs": 0, 00:15:41.998 "current_io_qpairs": 0, 00:15:41.998 "pending_bdev_io": 0, 00:15:41.998 "completed_nvme_io": 0, 00:15:41.998 "transports": [ 00:15:41.998 { 00:15:41.998 "trtype": "TCP" 00:15:41.998 } 00:15:41.998 ] 00:15:41.998 }, 00:15:41.998 { 00:15:41.998 "name": "nvmf_tgt_poll_group_001", 00:15:41.998 "admin_qpairs": 0, 00:15:41.998 "io_qpairs": 0, 00:15:41.998 "current_admin_qpairs": 0, 00:15:41.998 "current_io_qpairs": 0, 00:15:41.998 "pending_bdev_io": 0, 00:15:41.998 "completed_nvme_io": 0, 00:15:41.998 "transports": [ 00:15:41.998 { 00:15:41.998 "trtype": "TCP" 00:15:41.998 } 00:15:41.998 ] 00:15:41.998 }, 00:15:41.998 { 00:15:41.998 "name": "nvmf_tgt_poll_group_002", 00:15:41.998 "admin_qpairs": 0, 00:15:41.998 "io_qpairs": 0, 00:15:41.998 "current_admin_qpairs": 0, 00:15:41.998 "current_io_qpairs": 0, 00:15:41.998 "pending_bdev_io": 0, 00:15:41.998 "completed_nvme_io": 0, 00:15:41.998 "transports": [ 00:15:41.998 { 00:15:41.998 "trtype": "TCP" 00:15:41.998 } 00:15:41.998 ] 00:15:41.998 }, 00:15:41.998 { 00:15:41.998 "name": "nvmf_tgt_poll_group_003", 00:15:41.998 "admin_qpairs": 0, 00:15:41.998 "io_qpairs": 0, 00:15:41.998 "current_admin_qpairs": 0, 00:15:41.998 "current_io_qpairs": 0, 00:15:41.998 "pending_bdev_io": 0, 00:15:41.998 "completed_nvme_io": 0, 00:15:41.998 "transports": [ 00:15:41.998 { 00:15:41.998 "trtype": "TCP" 00:15:41.998 } 00:15:41.998 ] 00:15:41.998 } 00:15:41.998 ] 00:15:41.998 }' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:41.998 06:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.998 Malloc1 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.998 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.999 [2024-07-23 06:11:35.270876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:41.999 [2024-07-23 06:11:35.293389] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:41.999 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:41.999 could not add new controller: failed to write to nvme-fabrics device 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.999 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.937 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:42.937 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:42.937 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.937 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:42.937 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:44.838 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:44.838 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:44.838 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.838 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:44.838 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.838 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:44.838 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:44.838 [2024-07-23 06:11:38.053989] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:44.838 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:44.838 could not add new controller: failed to write to nvme-fabrics device 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.838 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:45.405 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:45.405 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:45.405 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.405 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:45.405 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
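The sequence just traced exercises the subsystem's host ACL: with allow_any_host disabled, a connect from the host NQN generated earlier is rejected with "does not allow host" and nvme-fabrics returns an I/O error; adding that host NQN to the subsystem (or re-enabling allow_any_host, as done here) makes the same connect succeed. A condensed sketch of that flow, assuming rpc_cmd maps onto scripts/rpc.py and reusing the NVME_HOSTNQN/NVME_HOSTID values set at the top of this test:

    ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # enforce the host list
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                           # rejected: host not listed
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                           # accepted
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1    # or drop the ACL entirely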
00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.941 [2024-07-23 06:11:40.834050] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.941 
06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.941 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:48.200 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:48.200 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:48.200 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.200 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:48.200 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:50.116 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:50.116 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:50.116 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
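From here the trace repeats the same cycle for each of the five iterations set by loops=5: remove the previous NVMe namespace, delete the subsystem, recreate nqn.2016-06.io.spdk:cnode1, re-attach Malloc1, reconnect, and verify the serial number again. A condensed sketch of one iteration, assuming rpc_cmd maps onto scripts/rpc.py and using the values visible in this run:

    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # Malloc1: 64 MiB, 512 B blocks
        ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        # wait until lsblk shows a device with serial SPDKISFASTANDAWESOME, then detach and clean up
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done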
00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.375 [2024-07-23 06:11:43.603872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.375 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.311 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:51.311 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:51.311 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.311 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:51.311 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:53.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.218 [2024-07-23 06:11:46.419022] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.218 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.219 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.785 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.785 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:53.785 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.785 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:53.785 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.324 06:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 [2024-07-23 06:11:49.150253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.324 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:56.585 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:56.585 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:56.585 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.585 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:56.585 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:58.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:58.490 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.749 [2024-07-23 06:11:51.878096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.749 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.319 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.319 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.319 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.320 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.320 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:01.854 06:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.854 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 [2024-07-23 06:11:54.712694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 [2024-07-23 06:11:54.760753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 [2024-07-23 06:11:54.808910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 [2024-07-23 06:11:54.857083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.855 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.856 [2024-07-23 06:11:54.905239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.856 06:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:01.856 "tick_rate": 2700000000, 00:16:01.856 "poll_groups": [ 00:16:01.856 { 00:16:01.856 "name": "nvmf_tgt_poll_group_000", 00:16:01.856 "admin_qpairs": 2, 00:16:01.856 "io_qpairs": 84, 00:16:01.856 "current_admin_qpairs": 0, 00:16:01.856 "current_io_qpairs": 0, 00:16:01.856 "pending_bdev_io": 0, 00:16:01.856 "completed_nvme_io": 134, 00:16:01.856 "transports": [ 00:16:01.856 { 00:16:01.856 "trtype": "TCP" 00:16:01.856 } 00:16:01.856 ] 00:16:01.856 }, 00:16:01.856 { 00:16:01.856 "name": "nvmf_tgt_poll_group_001", 00:16:01.856 "admin_qpairs": 2, 00:16:01.856 "io_qpairs": 84, 00:16:01.856 "current_admin_qpairs": 0, 00:16:01.856 "current_io_qpairs": 0, 00:16:01.856 "pending_bdev_io": 0, 00:16:01.856 "completed_nvme_io": 133, 00:16:01.856 "transports": [ 00:16:01.856 { 00:16:01.856 "trtype": "TCP" 00:16:01.856 } 00:16:01.856 ] 00:16:01.856 }, 00:16:01.856 { 00:16:01.856 "name": "nvmf_tgt_poll_group_002", 00:16:01.856 "admin_qpairs": 1, 00:16:01.856 "io_qpairs": 84, 00:16:01.856 "current_admin_qpairs": 0, 00:16:01.856 "current_io_qpairs": 0, 00:16:01.856 "pending_bdev_io": 0, 00:16:01.856 "completed_nvme_io": 151, 00:16:01.856 "transports": [ 00:16:01.856 { 00:16:01.856 "trtype": "TCP" 00:16:01.856 } 00:16:01.856 ] 00:16:01.856 }, 00:16:01.856 { 00:16:01.856 "name": "nvmf_tgt_poll_group_003", 00:16:01.856 "admin_qpairs": 2, 00:16:01.856 "io_qpairs": 84, 00:16:01.856 "current_admin_qpairs": 0, 00:16:01.856 "current_io_qpairs": 0, 00:16:01.856 "pending_bdev_io": 0, 00:16:01.856 "completed_nvme_io": 268, 00:16:01.856 "transports": [ 00:16:01.856 { 00:16:01.856 "trtype": "TCP" 00:16:01.856 } 00:16:01.856 ] 00:16:01.856 } 00:16:01.856 ] 00:16:01.856 }' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:16:01.856 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.856 rmmod nvme_tcp 00:16:01.856 rmmod nvme_fabrics 00:16:01.856 rmmod nvme_keyring 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1712687 ']' 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1712687 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1712687 ']' 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1712687 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1712687 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1712687' 00:16:01.856 killing process with pid 1712687 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1712687 00:16:01.856 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1712687 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.116 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.656 00:16:04.656 real 0m25.057s 00:16:04.656 user 1m21.104s 00:16:04.656 sys 0m4.120s 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 ************************************ 00:16:04.656 END TEST nvmf_rpc 00:16:04.656 ************************************ 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 ************************************ 00:16:04.656 START TEST nvmf_invalid 00:16:04.656 ************************************ 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:04.656 * Looking for test storage... 
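The jsum checks in the stats block above are just a jq-plus-awk reduction over the nvmf_get_stats payload. A minimal standalone sketch, where stats.json is a hypothetical file holding the JSON printed above:

  # Sum io_qpairs across every poll group in a saved nvmf_get_stats payload.
  jq '.poll_groups[].io_qpairs' stats.json | awk '{s+=$1} END {print s}'
  # For the four poll groups shown above (84 io_qpairs each) this prints 336,
  # the value the (( 336 > 0 )) guard at rpc.sh@113 checks against.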
00:16:04.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
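For reference, the nvmf_rpc loop that just finished (target/rpc.sh@81-94) reduces to a create/attach/detach/teardown cycle driven through scripts/rpc.py. A condensed sketch, assuming rpc_cmd is a thin wrapper around that script, with the NQN, address, and serial taken from the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host "$nqn"
  nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420   # host attaches; serial shows up in lsblk
  nvme disconnect -n "$nqn"                           # host detaches again
  $rpc nvmf_subsystem_remove_ns "$nqn" 5
  $rpc nvmf_delete_subsystem "$nqn"

The actual run also passes --hostnqn/--hostid (generated via nvme gen-hostnqn) to nvme connect, as shown in the log above; they are omitted here for brevity.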
00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.656 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.586 06:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.586 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:06.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:06.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
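The device scan above keys on PCI IDs 0x8086:0x159b (Intel E810, ice driver). As a hedged aside, the same two ports can be located directly on a host like this one; lspci availability is an assumption, and the sysfs path mirrors what common.sh walks:

  lspci -d 8086:159b                          # expect 0000:0a:00.0 and 0000:0a:00.1
  ls /sys/bus/pci/devices/0000:0a:00.0/net/   # kernel netdev name, cvl_0_0 in this run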
00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:06.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:06.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:16:06.587 00:16:06.587 --- 10.0.0.2 ping statistics --- 00:16:06.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.587 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:16:06.587 00:16:06.587 --- 10.0.0.1 ping statistics --- 00:16:06.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.587 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1717050 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1717050 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1717050 ']' 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.587 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:06.588 [2024-07-23 06:11:59.638566] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
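Condensed from the nvmf_tcp_init and nvmfappstart steps above, with interface names, addresses, and flags as they appear in this run (the polling loop standing in for waitforlisten is an assumption):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2        # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # Wait for the RPC socket before issuing rpc.py calls (assumed equivalent of waitforlisten).
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done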
00:16:06.588 [2024-07-23 06:11:59.638680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.588 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.588 [2024-07-23 06:11:59.678143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:06.588 [2024-07-23 06:11:59.704893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.588 [2024-07-23 06:11:59.791503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.588 [2024-07-23 06:11:59.791557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.588 [2024-07-23 06:11:59.791580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.588 [2024-07-23 06:11:59.791606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.588 [2024-07-23 06:11:59.791622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.588 [2024-07-23 06:11:59.791675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.588 [2024-07-23 06:11:59.791732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.588 [2024-07-23 06:11:59.791800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.588 [2024-07-23 06:11:59.791802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.588 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.588 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:16:06.588 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.588 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.588 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:06.846 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.846 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:06.846 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28711 00:16:07.139 [2024-07-23 06:12:00.196919] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:07.139 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:07.139 { 00:16:07.139 "nqn": "nqn.2016-06.io.spdk:cnode28711", 00:16:07.139 "tgt_name": "foobar", 00:16:07.139 "method": "nvmf_create_subsystem", 00:16:07.139 "req_id": 1 00:16:07.139 } 00:16:07.139 Got JSON-RPC error response 00:16:07.139 response: 00:16:07.139 { 00:16:07.139 "code": -32603, 00:16:07.139 "message": "Unable to find target foobar" 00:16:07.139 }' 00:16:07.139 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 
request: 00:16:07.139 { 00:16:07.139 "nqn": "nqn.2016-06.io.spdk:cnode28711", 00:16:07.139 "tgt_name": "foobar", 00:16:07.139 "method": "nvmf_create_subsystem", 00:16:07.139 "req_id": 1 00:16:07.139 } 00:16:07.139 Got JSON-RPC error response 00:16:07.139 response: 00:16:07.139 { 00:16:07.139 "code": -32603, 00:16:07.139 "message": "Unable to find target foobar" 00:16:07.139 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:07.139 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:07.139 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17904 00:16:07.398 [2024-07-23 06:12:00.461825] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17904: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:07.398 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:07.398 { 00:16:07.398 "nqn": "nqn.2016-06.io.spdk:cnode17904", 00:16:07.398 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:07.398 "method": "nvmf_create_subsystem", 00:16:07.398 "req_id": 1 00:16:07.398 } 00:16:07.398 Got JSON-RPC error response 00:16:07.398 response: 00:16:07.398 { 00:16:07.398 "code": -32602, 00:16:07.398 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:07.398 }' 00:16:07.398 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:07.398 { 00:16:07.398 "nqn": "nqn.2016-06.io.spdk:cnode17904", 00:16:07.398 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:07.398 "method": "nvmf_create_subsystem", 00:16:07.398 "req_id": 1 00:16:07.398 } 00:16:07.398 Got JSON-RPC error response 00:16:07.398 response: 00:16:07.398 { 00:16:07.398 "code": -32602, 00:16:07.398 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:07.398 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:07.398 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:07.398 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7821 00:16:07.398 [2024-07-23 06:12:00.722680] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7821: invalid model number 'SPDK_Controller' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:07.657 { 00:16:07.657 "nqn": "nqn.2016-06.io.spdk:cnode7821", 00:16:07.657 "model_number": "SPDK_Controller\u001f", 00:16:07.657 "method": "nvmf_create_subsystem", 00:16:07.657 "req_id": 1 00:16:07.657 } 00:16:07.657 Got JSON-RPC error response 00:16:07.657 response: 00:16:07.657 { 00:16:07.657 "code": -32602, 00:16:07.657 "message": "Invalid MN SPDK_Controller\u001f" 00:16:07.657 }' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:07.657 { 00:16:07.657 "nqn": "nqn.2016-06.io.spdk:cnode7821", 00:16:07.657 "model_number": "SPDK_Controller\u001f", 00:16:07.657 "method": "nvmf_create_subsystem", 00:16:07.657 "req_id": 1 00:16:07.657 } 00:16:07.657 Got JSON-RPC error response 00:16:07.657 response: 00:16:07.657 { 00:16:07.657 "code": -32602, 00:16:07.657 "message": "Invalid MN SPDK_Controller\u001f" 00:16:07.657 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:07.657 
06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:07.657 
06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.657 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:07.658 
06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 
06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'O(;m@GL}5'\''~\wGIFTMFY' 00:16:07.658 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'O(;m@GL}5'\''~\wGIFTMFY' nqn.2016-06.io.spdk:cnode18377 00:16:07.917 [2024-07-23 06:12:01.043768] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18377: invalid serial number 'O(;m@GL}5'~\wGIFTMFY' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:07.917 { 00:16:07.917 "nqn": "nqn.2016-06.io.spdk:cnode18377", 00:16:07.917 "serial_number": "\u007fO(;m@GL}5'\''~\\wGIFTMFY", 00:16:07.917 "method": "nvmf_create_subsystem", 00:16:07.917 "req_id": 1 00:16:07.917 } 00:16:07.917 Got JSON-RPC error response 00:16:07.917 response: 00:16:07.917 { 00:16:07.917 "code": -32602, 00:16:07.917 "message": "Invalid SN \u007fO(;m@GL}5'\''~\\wGIFTMFY" 00:16:07.917 }' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:07.917 { 00:16:07.917 "nqn": "nqn.2016-06.io.spdk:cnode18377", 00:16:07.917 "serial_number": "\u007fO(;m@GL}5'~\\wGIFTMFY", 00:16:07.917 "method": "nvmf_create_subsystem", 00:16:07.917 "req_id": 1 00:16:07.917 } 00:16:07.917 Got JSON-RPC error response 00:16:07.917 response: 00:16:07.917 { 00:16:07.917 "code": -32602, 00:16:07.917 "message": "Invalid SN \u007fO(;m@GL}5'~\\wGIFTMFY" 00:16:07.917 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:07.917 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6c' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 82 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.918 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=Y 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ',A?V/~{6Cl9Wo\A|R\/*EJfSuoN=gZF'\''7G5kYzv3g' 00:16:07.919 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ',A?V/~{6Cl9Wo\A|R\/*EJfSuoN=gZF'\''7G5kYzv3g' nqn.2016-06.io.spdk:cnode19412 00:16:08.177 [2024-07-23 06:12:01.412994] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19412: invalid model number ',A?V/~{6Cl9Wo\A|R\/*EJfSuoN=gZF'7G5kYzv3g' 00:16:08.177 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:08.177 { 00:16:08.177 "nqn": "nqn.2016-06.io.spdk:cnode19412", 00:16:08.177 "model_number": ",A?V/~{6Cl9Wo\\A|R\\/*EJfSuoN=gZF'\''7G5kYzv3g", 00:16:08.177 "method": "nvmf_create_subsystem", 00:16:08.177 "req_id": 1 00:16:08.177 } 00:16:08.177 Got JSON-RPC error response 00:16:08.177 response: 00:16:08.177 { 00:16:08.177 "code": -32602, 00:16:08.177 "message": "Invalid MN ,A?V/~{6Cl9Wo\\A|R\\/*EJfSuoN=gZF'\''7G5kYzv3g" 
00:16:08.177 }' 00:16:08.177 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:08.177 { 00:16:08.177 "nqn": "nqn.2016-06.io.spdk:cnode19412", 00:16:08.177 "model_number": ",A?V/~{6Cl9Wo\\A|R\\/*EJfSuoN=gZF'7G5kYzv3g", 00:16:08.177 "method": "nvmf_create_subsystem", 00:16:08.177 "req_id": 1 00:16:08.177 } 00:16:08.177 Got JSON-RPC error response 00:16:08.177 response: 00:16:08.177 { 00:16:08.177 "code": -32602, 00:16:08.177 "message": "Invalid MN ,A?V/~{6Cl9Wo\\A|R\\/*EJfSuoN=gZF'7G5kYzv3g" 00:16:08.177 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:08.177 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:08.435 [2024-07-23 06:12:01.665882] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.435 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:08.693 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:08.693 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:08.693 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:08.693 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:08.693 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:08.951 [2024-07-23 06:12:02.195641] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:08.951 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:08.951 { 00:16:08.951 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:08.951 "listen_address": { 00:16:08.951 "trtype": "tcp", 00:16:08.951 "traddr": "", 00:16:08.951 "trsvcid": "4421" 00:16:08.951 }, 00:16:08.951 "method": "nvmf_subsystem_remove_listener", 00:16:08.951 "req_id": 1 00:16:08.951 } 00:16:08.951 Got JSON-RPC error response 00:16:08.951 response: 00:16:08.951 { 00:16:08.951 "code": -32602, 00:16:08.951 "message": "Invalid parameters" 00:16:08.951 }' 00:16:08.951 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:08.951 { 00:16:08.951 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:08.951 "listen_address": { 00:16:08.951 "trtype": "tcp", 00:16:08.951 "traddr": "", 00:16:08.951 "trsvcid": "4421" 00:16:08.951 }, 00:16:08.951 "method": "nvmf_subsystem_remove_listener", 00:16:08.951 "req_id": 1 00:16:08.951 } 00:16:08.951 Got JSON-RPC error response 00:16:08.951 response: 00:16:08.951 { 00:16:08.951 "code": -32602, 00:16:08.951 "message": "Invalid parameters" 00:16:08.951 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:08.951 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6142 -i 0 00:16:09.209 [2024-07-23 06:12:02.444418] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6142: invalid cntlid range [0-65519] 00:16:09.209 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 
00:16:09.209 { 00:16:09.209 "nqn": "nqn.2016-06.io.spdk:cnode6142", 00:16:09.209 "min_cntlid": 0, 00:16:09.209 "method": "nvmf_create_subsystem", 00:16:09.209 "req_id": 1 00:16:09.209 } 00:16:09.209 Got JSON-RPC error response 00:16:09.209 response: 00:16:09.209 { 00:16:09.209 "code": -32602, 00:16:09.209 "message": "Invalid cntlid range [0-65519]" 00:16:09.209 }' 00:16:09.209 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:09.209 { 00:16:09.209 "nqn": "nqn.2016-06.io.spdk:cnode6142", 00:16:09.209 "min_cntlid": 0, 00:16:09.209 "method": "nvmf_create_subsystem", 00:16:09.209 "req_id": 1 00:16:09.209 } 00:16:09.209 Got JSON-RPC error response 00:16:09.209 response: 00:16:09.209 { 00:16:09.209 "code": -32602, 00:16:09.209 "message": "Invalid cntlid range [0-65519]" 00:16:09.209 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:09.209 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6013 -i 65520 00:16:09.468 [2024-07-23 06:12:02.701271] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6013: invalid cntlid range [65520-65519] 00:16:09.468 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:09.468 { 00:16:09.468 "nqn": "nqn.2016-06.io.spdk:cnode6013", 00:16:09.468 "min_cntlid": 65520, 00:16:09.468 "method": "nvmf_create_subsystem", 00:16:09.468 "req_id": 1 00:16:09.468 } 00:16:09.468 Got JSON-RPC error response 00:16:09.468 response: 00:16:09.468 { 00:16:09.468 "code": -32602, 00:16:09.468 "message": "Invalid cntlid range [65520-65519]" 00:16:09.468 }' 00:16:09.468 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:09.468 { 00:16:09.468 "nqn": "nqn.2016-06.io.spdk:cnode6013", 00:16:09.468 "min_cntlid": 65520, 00:16:09.468 "method": "nvmf_create_subsystem", 00:16:09.468 "req_id": 1 00:16:09.468 } 00:16:09.468 Got JSON-RPC error response 00:16:09.468 response: 00:16:09.468 { 00:16:09.468 "code": -32602, 00:16:09.468 "message": "Invalid cntlid range [65520-65519]" 00:16:09.468 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:09.468 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6416 -I 0 00:16:09.725 [2024-07-23 06:12:02.962173] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6416: invalid cntlid range [1-0] 00:16:09.725 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:09.725 { 00:16:09.725 "nqn": "nqn.2016-06.io.spdk:cnode6416", 00:16:09.725 "max_cntlid": 0, 00:16:09.725 "method": "nvmf_create_subsystem", 00:16:09.725 "req_id": 1 00:16:09.725 } 00:16:09.725 Got JSON-RPC error response 00:16:09.725 response: 00:16:09.725 { 00:16:09.725 "code": -32602, 00:16:09.725 "message": "Invalid cntlid range [1-0]" 00:16:09.725 }' 00:16:09.725 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:09.725 { 00:16:09.725 "nqn": "nqn.2016-06.io.spdk:cnode6416", 00:16:09.725 "max_cntlid": 0, 00:16:09.725 "method": "nvmf_create_subsystem", 00:16:09.725 "req_id": 1 00:16:09.725 } 00:16:09.725 Got JSON-RPC error response 00:16:09.725 response: 00:16:09.725 { 00:16:09.725 "code": -32602, 00:16:09.725 "message": "Invalid 
cntlid range [1-0]" 00:16:09.725 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:09.725 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3668 -I 65520 00:16:09.983 [2024-07-23 06:12:03.218974] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3668: invalid cntlid range [1-65520] 00:16:09.983 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:09.983 { 00:16:09.983 "nqn": "nqn.2016-06.io.spdk:cnode3668", 00:16:09.983 "max_cntlid": 65520, 00:16:09.983 "method": "nvmf_create_subsystem", 00:16:09.983 "req_id": 1 00:16:09.983 } 00:16:09.983 Got JSON-RPC error response 00:16:09.983 response: 00:16:09.983 { 00:16:09.983 "code": -32602, 00:16:09.983 "message": "Invalid cntlid range [1-65520]" 00:16:09.983 }' 00:16:09.983 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:09.983 { 00:16:09.983 "nqn": "nqn.2016-06.io.spdk:cnode3668", 00:16:09.983 "max_cntlid": 65520, 00:16:09.983 "method": "nvmf_create_subsystem", 00:16:09.983 "req_id": 1 00:16:09.983 } 00:16:09.983 Got JSON-RPC error response 00:16:09.983 response: 00:16:09.983 { 00:16:09.983 "code": -32602, 00:16:09.983 "message": "Invalid cntlid range [1-65520]" 00:16:09.983 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:09.983 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20508 -i 6 -I 5 00:16:10.240 [2024-07-23 06:12:03.467838] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20508: invalid cntlid range [6-5] 00:16:10.240 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:10.240 { 00:16:10.240 "nqn": "nqn.2016-06.io.spdk:cnode20508", 00:16:10.240 "min_cntlid": 6, 00:16:10.240 "max_cntlid": 5, 00:16:10.240 "method": "nvmf_create_subsystem", 00:16:10.240 "req_id": 1 00:16:10.240 } 00:16:10.240 Got JSON-RPC error response 00:16:10.240 response: 00:16:10.240 { 00:16:10.240 "code": -32602, 00:16:10.240 "message": "Invalid cntlid range [6-5]" 00:16:10.240 }' 00:16:10.240 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:10.240 { 00:16:10.240 "nqn": "nqn.2016-06.io.spdk:cnode20508", 00:16:10.240 "min_cntlid": 6, 00:16:10.240 "max_cntlid": 5, 00:16:10.240 "method": "nvmf_create_subsystem", 00:16:10.240 "req_id": 1 00:16:10.240 } 00:16:10.240 Got JSON-RPC error response 00:16:10.240 response: 00:16:10.240 { 00:16:10.240 "code": -32602, 00:16:10.240 "message": "Invalid cntlid range [6-5]" 00:16:10.240 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:10.240 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:10.500 { 00:16:10.500 "name": "foobar", 00:16:10.500 "method": "nvmf_delete_target", 00:16:10.500 "req_id": 1 00:16:10.500 } 00:16:10.500 Got JSON-RPC error response 00:16:10.500 response: 00:16:10.500 { 00:16:10.500 "code": -32602, 00:16:10.500 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:16:10.500 }' 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:10.500 { 00:16:10.500 "name": "foobar", 00:16:10.500 "method": "nvmf_delete_target", 00:16:10.500 "req_id": 1 00:16:10.500 } 00:16:10.500 Got JSON-RPC error response 00:16:10.500 response: 00:16:10.500 { 00:16:10.500 "code": -32602, 00:16:10.500 "message": "The specified target doesn't exist, cannot delete it." 00:16:10.500 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.500 rmmod nvme_tcp 00:16:10.500 rmmod nvme_fabrics 00:16:10.500 rmmod nvme_keyring 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1717050 ']' 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1717050 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1717050 ']' 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1717050 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1717050 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:10.500 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:10.501 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1717050' 00:16:10.501 killing process with pid 1717050 00:16:10.501 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1717050 00:16:10.501 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1717050 00:16:10.761 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.761 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.761 06:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.761 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.761 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.761 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.761 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.761 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:12.669 00:16:12.669 real 0m8.520s 00:16:12.669 user 0m20.078s 00:16:12.669 sys 0m2.339s 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:12.669 ************************************ 00:16:12.669 END TEST nvmf_invalid 00:16:12.669 ************************************ 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.669 06:12:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:12.669 ************************************ 00:16:12.669 START TEST nvmf_connect_stress 00:16:12.669 ************************************ 00:16:12.669 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:12.929 * Looking for test storage... 
00:16:12.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.929 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.836 06:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:14.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:14.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.836 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:14.837 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:14.837 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.837 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:14.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:16:14.837 00:16:14.837 --- 10.0.0.2 ping statistics --- 00:16:14.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.837 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:16:14.837 00:16:14.837 --- 10.0.0.1 ping statistics --- 00:16:14.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.837 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1720149 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1720149 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1720149 ']' 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.837 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.837 [2024-07-23 06:12:08.172595] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
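For context, the lines above launch the NVMe-oF target inside the cvl_0_0_ns_spdk network namespace and then wait for it to answer on its RPC socket before the test proceeds. A minimal stand-alone sketch of that step, using only the paths and flags that appear in this trace (the rpc_get_methods readiness probe is an assumption, not the harness's exact waitforlisten logic):

    # start the target in the test namespace: shm id 0, all tracepoint groups (0xFFFF), cores 0xE
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default RPC socket shown in the trace until the app is up (assumed probe)
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The startup banner and DPDK EAL parameter dump that follow are the target's own output once it comes up.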
00:16:14.837 [2024-07-23 06:12:08.172696] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.096 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.096 [2024-07-23 06:12:08.211810] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:15.096 [2024-07-23 06:12:08.243843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.096 [2024-07-23 06:12:08.338569] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.096 [2024-07-23 06:12:08.338651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.096 [2024-07-23 06:12:08.338672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.096 [2024-07-23 06:12:08.338685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.096 [2024-07-23 06:12:08.338697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.096 [2024-07-23 06:12:08.338753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.096 [2024-07-23 06:12:08.338810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.096 [2024-07-23 06:12:08.338812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.355 [2024-07-23 06:12:08.481467] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.355 [2024-07-23 06:12:08.506737] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.355 NULL1 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1720221 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.355 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- 
# rpc_cmd 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.356 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.614 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.614 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:15.614 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.614 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.614 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.872 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.872 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:15.872 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.872 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.872 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.440 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.440 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:16.440 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.440 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.440 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.698 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.698 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:16.698 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.698 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.698 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.956 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.956 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:16.956 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.956 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.956 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.214 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.214 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:17.214 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:16:17.214 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.214 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.473 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.473 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:17.473 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.473 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.473 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.043 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.043 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:18.043 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.043 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.043 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.303 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.303 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:18.303 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.303 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.303 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.562 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.562 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:18.562 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.562 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.562 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.820 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.820 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:18.820 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.820 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.820 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.079 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.079 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:19.079 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.079 
06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.079 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.649 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.649 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:19.649 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.649 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.649 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.908 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.908 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:19.908 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.908 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.908 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.166 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.166 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:20.166 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.166 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.166 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.424 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.424 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:20.424 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.424 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.424 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.704 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.704 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:20.704 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.704 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.704 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.277 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.277 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:21.277 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.277 06:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.277 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.537 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.537 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:21.537 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.537 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.537 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.796 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.796 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:21.796 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.796 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.796 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.054 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.054 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:22.054 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.054 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.054 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.314 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:22.314 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.314 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.314 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.884 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.884 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:22.884 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.884 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.884 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.145 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.145 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:23.145 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.145 06:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.145 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.404 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.404 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:23.404 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.404 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.404 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.662 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.663 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:23.663 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.663 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.663 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.922 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.922 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:23.922 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.922 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.922 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.493 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.493 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:24.493 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.493 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.493 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.752 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.752 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:24.752 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.752 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.752 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.010 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.010 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:25.010 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.010 06:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.011 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.270 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.270 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:25.270 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.270 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.270 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720221 00:16:25.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1720221) - No such process 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1720221 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.530 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.530 rmmod nvme_tcp 00:16:25.530 rmmod nvme_fabrics 00:16:25.789 rmmod nvme_keyring 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1720149 ']' 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1720149 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1720149 ']' 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1720149 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:16:25.789 06:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1720149 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1720149' 00:16:25.789 killing process with pid 1720149 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1720149 00:16:25.789 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1720149 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.050 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:27.960 00:16:27.960 real 0m15.190s 00:16:27.960 user 0m37.962s 00:16:27.960 sys 0m6.048s 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:27.960 ************************************ 00:16:27.960 END TEST nvmf_connect_stress 00:16:27.960 ************************************ 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.960 06:12:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.960 ************************************ 00:16:27.961 START TEST nvmf_fused_ordering 00:16:27.961 ************************************ 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:27.961 * Looking for test storage... 
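
For reference, the connect_stress teardown traced above follows a simple pattern: poll the stress workload with kill -0 while issuing RPCs, reap it once the PID disappears, remove the rpc.txt scratch file, then let nvmftestfini unload the kernel initiator modules and stop the nvmf_tgt application. A minimal bash sketch of that pattern, not the literal script; STRESS_PID and NVMFPID stand in for the PIDs 1720221 and 1720149 logged above:

    # Poll until the stress workload exits on its own; the final kill -0 prints
    # "No such process", which is the expected exit condition of the loop.
    while kill -0 "$STRESS_PID"; do
        rpc_cmd    # issue an RPC to the target each iteration (arguments elided in the trace above)
    done
    wait "$STRESS_PID" 2>/dev/null || true
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    trap - SIGINT SIGTERM EXIT

    # nvmftestfini: unload the kernel NVMe/TCP initiator modules, then stop nvmf_tgt.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$NVMFPID" && wait "$NVMFPID"
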
00:16:27.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:27.961 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.151 06:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:30.151 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:30.151 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.151 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:30.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:30.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:16:30.152 00:16:30.152 --- 10.0.0.2 ping statistics --- 00:16:30.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.152 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:16:30.152 00:16:30.152 --- 10.0.0.1 ping statistics --- 00:16:30.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.152 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1723463 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1723463 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1723463 ']' 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.152 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.152 [2024-07-23 06:12:23.384649] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
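
The NVMe/TCP test network verified by the two pings above is assembled by nvmf_tcp_init a few lines earlier, and every command is visible in the trace: the target-side port (cvl_0_0) is moved into its own network namespace and both ends get a 10.0.0.x/24 address. Condensed into one runnable sequence (same interface names and addresses as logged above; requires root):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                         # target lives in this namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check
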
00:16:30.152 [2024-07-23 06:12:23.384734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.152 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.152 [2024-07-23 06:12:23.423465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:30.152 [2024-07-23 06:12:23.455831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.412 [2024-07-23 06:12:23.546341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.412 [2024-07-23 06:12:23.546405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.412 [2024-07-23 06:12:23.546422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.412 [2024-07-23 06:12:23.546435] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.412 [2024-07-23 06:12:23.546447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.412 [2024-07-23 06:12:23.546483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.412 [2024-07-23 06:12:23.693573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.412 
06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.412 [2024-07-23 06:12:23.709812] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.412 NULL1 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.412 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:30.412 [2024-07-23 06:12:23.755210] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:16:30.412 [2024-07-23 06:12:23.755255] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723597 ] 00:16:30.672 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.672 [2024-07-23 06:12:23.788395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
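
Before the fused_ordering application attaches below, the target side is configured entirely through rpc_cmd calls, all of which appear in the trace above. Stripped of the xtrace noise, the setup and the test invocation reduce to the following sketch; $rootdir stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout used on this node:

    # fused_ordering.sh target setup, issued against the nvmf_tgt started with -m 0x2:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks (the 1GB namespace reported below)
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Exercise fused command ordering against the listener just created:
    $rootdir/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
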
00:16:31.244 Attached to nqn.2016-06.io.spdk:cnode1 00:16:31.244 Namespace ID: 1 size: 1GB 00:16:31.244 fused_ordering(0) 00:16:31.244 fused_ordering(1) 00:16:31.244 fused_ordering(2) 00:16:31.244 fused_ordering(3) 00:16:31.244 fused_ordering(4) 00:16:31.244 fused_ordering(5) 00:16:31.244 fused_ordering(6) 00:16:31.244 fused_ordering(7) 00:16:31.244 fused_ordering(8) 00:16:31.244 fused_ordering(9) 00:16:31.244 fused_ordering(10) 00:16:31.244 fused_ordering(11) 00:16:31.244 fused_ordering(12) 00:16:31.244 fused_ordering(13) 00:16:31.244 fused_ordering(14) 00:16:31.244 fused_ordering(15) 00:16:31.244 fused_ordering(16) 00:16:31.244 fused_ordering(17) 00:16:31.244 fused_ordering(18) 00:16:31.244 fused_ordering(19) 00:16:31.244 fused_ordering(20) 00:16:31.244 fused_ordering(21) 00:16:31.244 fused_ordering(22) 00:16:31.244 fused_ordering(23) 00:16:31.244 fused_ordering(24) 00:16:31.244 fused_ordering(25) 00:16:31.244 fused_ordering(26) 00:16:31.244 fused_ordering(27) 00:16:31.244 fused_ordering(28) 00:16:31.244 fused_ordering(29) 00:16:31.244 fused_ordering(30) 00:16:31.244 fused_ordering(31) 00:16:31.244 fused_ordering(32) 00:16:31.244 fused_ordering(33) 00:16:31.244 fused_ordering(34) 00:16:31.244 fused_ordering(35) 00:16:31.244 fused_ordering(36) 00:16:31.244 fused_ordering(37) 00:16:31.244 fused_ordering(38) 00:16:31.244 fused_ordering(39) 00:16:31.244 fused_ordering(40) 00:16:31.244 fused_ordering(41) 00:16:31.244 fused_ordering(42) 00:16:31.244 fused_ordering(43) 00:16:31.244 fused_ordering(44) 00:16:31.244 fused_ordering(45) 00:16:31.244 fused_ordering(46) 00:16:31.244 fused_ordering(47) 00:16:31.244 fused_ordering(48) 00:16:31.244 fused_ordering(49) 00:16:31.244 fused_ordering(50) 00:16:31.244 fused_ordering(51) 00:16:31.244 fused_ordering(52) 00:16:31.244 fused_ordering(53) 00:16:31.244 fused_ordering(54) 00:16:31.244 fused_ordering(55) 00:16:31.244 fused_ordering(56) 00:16:31.244 fused_ordering(57) 00:16:31.244 fused_ordering(58) 00:16:31.244 fused_ordering(59) 00:16:31.244 fused_ordering(60) 00:16:31.244 fused_ordering(61) 00:16:31.244 fused_ordering(62) 00:16:31.244 fused_ordering(63) 00:16:31.244 fused_ordering(64) 00:16:31.244 fused_ordering(65) 00:16:31.244 fused_ordering(66) 00:16:31.244 fused_ordering(67) 00:16:31.244 fused_ordering(68) 00:16:31.244 fused_ordering(69) 00:16:31.244 fused_ordering(70) 00:16:31.244 fused_ordering(71) 00:16:31.244 fused_ordering(72) 00:16:31.244 fused_ordering(73) 00:16:31.244 fused_ordering(74) 00:16:31.244 fused_ordering(75) 00:16:31.244 fused_ordering(76) 00:16:31.244 fused_ordering(77) 00:16:31.244 fused_ordering(78) 00:16:31.244 fused_ordering(79) 00:16:31.244 fused_ordering(80) 00:16:31.244 fused_ordering(81) 00:16:31.244 fused_ordering(82) 00:16:31.244 fused_ordering(83) 00:16:31.244 fused_ordering(84) 00:16:31.244 fused_ordering(85) 00:16:31.244 fused_ordering(86) 00:16:31.244 fused_ordering(87) 00:16:31.244 fused_ordering(88) 00:16:31.244 fused_ordering(89) 00:16:31.244 fused_ordering(90) 00:16:31.244 fused_ordering(91) 00:16:31.244 fused_ordering(92) 00:16:31.244 fused_ordering(93) 00:16:31.244 fused_ordering(94) 00:16:31.244 fused_ordering(95) 00:16:31.244 fused_ordering(96) 00:16:31.244 fused_ordering(97) 00:16:31.244 fused_ordering(98) 00:16:31.245 fused_ordering(99) 00:16:31.245 fused_ordering(100) 00:16:31.245 fused_ordering(101) 00:16:31.245 fused_ordering(102) 00:16:31.245 fused_ordering(103) 00:16:31.245 fused_ordering(104) 00:16:31.245 fused_ordering(105) 00:16:31.245 fused_ordering(106) 00:16:31.245 fused_ordering(107) 
00:16:31.245 fused_ordering(108) 00:16:31.245 fused_ordering(109) 00:16:31.245 fused_ordering(110) 00:16:31.245 fused_ordering(111) 00:16:31.245 fused_ordering(112) 00:16:31.245 fused_ordering(113) 00:16:31.245 fused_ordering(114) 00:16:31.245 fused_ordering(115) 00:16:31.245 fused_ordering(116) 00:16:31.245 fused_ordering(117) 00:16:31.245 fused_ordering(118) 00:16:31.245 fused_ordering(119) 00:16:31.245 fused_ordering(120) 00:16:31.245 fused_ordering(121) 00:16:31.245 fused_ordering(122) 00:16:31.245 fused_ordering(123) 00:16:31.245 fused_ordering(124) 00:16:31.245 fused_ordering(125) 00:16:31.245 fused_ordering(126) 00:16:31.245 fused_ordering(127) 00:16:31.245 fused_ordering(128) 00:16:31.245 fused_ordering(129) 00:16:31.245 fused_ordering(130) 00:16:31.245 fused_ordering(131) 00:16:31.245 fused_ordering(132) 00:16:31.245 fused_ordering(133) 00:16:31.245 fused_ordering(134) 00:16:31.245 fused_ordering(135) 00:16:31.245 fused_ordering(136) 00:16:31.245 fused_ordering(137) 00:16:31.245 fused_ordering(138) 00:16:31.245 fused_ordering(139) 00:16:31.245 fused_ordering(140) 00:16:31.245 fused_ordering(141) 00:16:31.245 fused_ordering(142) 00:16:31.245 fused_ordering(143) 00:16:31.245 fused_ordering(144) 00:16:31.245 fused_ordering(145) 00:16:31.245 fused_ordering(146) 00:16:31.245 fused_ordering(147) 00:16:31.245 fused_ordering(148) 00:16:31.245 fused_ordering(149) 00:16:31.245 fused_ordering(150) 00:16:31.245 fused_ordering(151) 00:16:31.245 fused_ordering(152) 00:16:31.245 fused_ordering(153) 00:16:31.245 fused_ordering(154) 00:16:31.245 fused_ordering(155) 00:16:31.245 fused_ordering(156) 00:16:31.245 fused_ordering(157) 00:16:31.245 fused_ordering(158) 00:16:31.245 fused_ordering(159) 00:16:31.245 fused_ordering(160) 00:16:31.245 fused_ordering(161) 00:16:31.245 fused_ordering(162) 00:16:31.245 fused_ordering(163) 00:16:31.245 fused_ordering(164) 00:16:31.245 fused_ordering(165) 00:16:31.245 fused_ordering(166) 00:16:31.245 fused_ordering(167) 00:16:31.245 fused_ordering(168) 00:16:31.245 fused_ordering(169) 00:16:31.245 fused_ordering(170) 00:16:31.245 fused_ordering(171) 00:16:31.245 fused_ordering(172) 00:16:31.245 fused_ordering(173) 00:16:31.245 fused_ordering(174) 00:16:31.245 fused_ordering(175) 00:16:31.245 fused_ordering(176) 00:16:31.245 fused_ordering(177) 00:16:31.245 fused_ordering(178) 00:16:31.245 fused_ordering(179) 00:16:31.245 fused_ordering(180) 00:16:31.245 fused_ordering(181) 00:16:31.245 fused_ordering(182) 00:16:31.245 fused_ordering(183) 00:16:31.245 fused_ordering(184) 00:16:31.245 fused_ordering(185) 00:16:31.245 fused_ordering(186) 00:16:31.245 fused_ordering(187) 00:16:31.245 fused_ordering(188) 00:16:31.245 fused_ordering(189) 00:16:31.245 fused_ordering(190) 00:16:31.245 fused_ordering(191) 00:16:31.245 fused_ordering(192) 00:16:31.245 fused_ordering(193) 00:16:31.245 fused_ordering(194) 00:16:31.245 fused_ordering(195) 00:16:31.245 fused_ordering(196) 00:16:31.245 fused_ordering(197) 00:16:31.245 fused_ordering(198) 00:16:31.245 fused_ordering(199) 00:16:31.245 fused_ordering(200) 00:16:31.245 fused_ordering(201) 00:16:31.245 fused_ordering(202) 00:16:31.245 fused_ordering(203) 00:16:31.245 fused_ordering(204) 00:16:31.245 fused_ordering(205) 00:16:31.816 fused_ordering(206) 00:16:31.816 fused_ordering(207) 00:16:31.816 fused_ordering(208) 00:16:31.816 fused_ordering(209) 00:16:31.816 fused_ordering(210) 00:16:31.816 fused_ordering(211) 00:16:31.816 fused_ordering(212) 00:16:31.816 fused_ordering(213) 00:16:31.816 fused_ordering(214) 00:16:31.816 
fused_ordering(215) 00:16:31.816 fused_ordering(216) 00:16:31.816 fused_ordering(217) 00:16:31.816 fused_ordering(218) 00:16:31.816 fused_ordering(219) 00:16:31.816 fused_ordering(220) 00:16:31.816 fused_ordering(221) 00:16:31.816 fused_ordering(222) 00:16:31.816 fused_ordering(223) 00:16:31.816 fused_ordering(224) 00:16:31.816 fused_ordering(225) 00:16:31.816 fused_ordering(226) 00:16:31.816 fused_ordering(227) 00:16:31.816 fused_ordering(228) 00:16:31.816 fused_ordering(229) 00:16:31.816 fused_ordering(230) 00:16:31.816 fused_ordering(231) 00:16:31.816 fused_ordering(232) 00:16:31.816 fused_ordering(233) 00:16:31.816 fused_ordering(234) 00:16:31.816 fused_ordering(235) 00:16:31.816 fused_ordering(236) 00:16:31.816 fused_ordering(237) 00:16:31.816 fused_ordering(238) 00:16:31.816 fused_ordering(239) 00:16:31.816 fused_ordering(240) 00:16:31.816 fused_ordering(241) 00:16:31.816 fused_ordering(242) 00:16:31.816 fused_ordering(243) 00:16:31.816 fused_ordering(244) 00:16:31.816 fused_ordering(245) 00:16:31.816 fused_ordering(246) 00:16:31.816 fused_ordering(247) 00:16:31.816 fused_ordering(248) 00:16:31.816 fused_ordering(249) 00:16:31.816 fused_ordering(250) 00:16:31.816 fused_ordering(251) 00:16:31.816 fused_ordering(252) 00:16:31.816 fused_ordering(253) 00:16:31.816 fused_ordering(254) 00:16:31.816 fused_ordering(255) 00:16:31.816 fused_ordering(256) 00:16:31.816 fused_ordering(257) 00:16:31.816 fused_ordering(258) 00:16:31.816 fused_ordering(259) 00:16:31.816 fused_ordering(260) 00:16:31.816 fused_ordering(261) 00:16:31.816 fused_ordering(262) 00:16:31.816 fused_ordering(263) 00:16:31.816 fused_ordering(264) 00:16:31.816 fused_ordering(265) 00:16:31.816 fused_ordering(266) 00:16:31.816 fused_ordering(267) 00:16:31.816 fused_ordering(268) 00:16:31.816 fused_ordering(269) 00:16:31.816 fused_ordering(270) 00:16:31.816 fused_ordering(271) 00:16:31.816 fused_ordering(272) 00:16:31.816 fused_ordering(273) 00:16:31.816 fused_ordering(274) 00:16:31.816 fused_ordering(275) 00:16:31.816 fused_ordering(276) 00:16:31.816 fused_ordering(277) 00:16:31.816 fused_ordering(278) 00:16:31.816 fused_ordering(279) 00:16:31.816 fused_ordering(280) 00:16:31.816 fused_ordering(281) 00:16:31.816 fused_ordering(282) 00:16:31.816 fused_ordering(283) 00:16:31.816 fused_ordering(284) 00:16:31.816 fused_ordering(285) 00:16:31.816 fused_ordering(286) 00:16:31.816 fused_ordering(287) 00:16:31.816 fused_ordering(288) 00:16:31.816 fused_ordering(289) 00:16:31.816 fused_ordering(290) 00:16:31.816 fused_ordering(291) 00:16:31.816 fused_ordering(292) 00:16:31.816 fused_ordering(293) 00:16:31.816 fused_ordering(294) 00:16:31.816 fused_ordering(295) 00:16:31.816 fused_ordering(296) 00:16:31.816 fused_ordering(297) 00:16:31.816 fused_ordering(298) 00:16:31.816 fused_ordering(299) 00:16:31.816 fused_ordering(300) 00:16:31.816 fused_ordering(301) 00:16:31.816 fused_ordering(302) 00:16:31.816 fused_ordering(303) 00:16:31.816 fused_ordering(304) 00:16:31.816 fused_ordering(305) 00:16:31.816 fused_ordering(306) 00:16:31.816 fused_ordering(307) 00:16:31.816 fused_ordering(308) 00:16:31.816 fused_ordering(309) 00:16:31.816 fused_ordering(310) 00:16:31.816 fused_ordering(311) 00:16:31.816 fused_ordering(312) 00:16:31.816 fused_ordering(313) 00:16:31.816 fused_ordering(314) 00:16:31.816 fused_ordering(315) 00:16:31.816 fused_ordering(316) 00:16:31.816 fused_ordering(317) 00:16:31.816 fused_ordering(318) 00:16:31.816 fused_ordering(319) 00:16:31.816 fused_ordering(320) 00:16:31.816 fused_ordering(321) 00:16:31.816 fused_ordering(322) 
00:16:31.816 fused_ordering(323) 00:16:31.816 fused_ordering(324) 00:16:31.816 fused_ordering(325) 00:16:31.816 fused_ordering(326) 00:16:31.816 fused_ordering(327) 00:16:31.816 fused_ordering(328) 00:16:31.816 fused_ordering(329) 00:16:31.816 fused_ordering(330) 00:16:31.816 fused_ordering(331) 00:16:31.816 fused_ordering(332) 00:16:31.816 fused_ordering(333) 00:16:31.816 fused_ordering(334) 00:16:31.816 fused_ordering(335) 00:16:31.816 fused_ordering(336) 00:16:31.816 fused_ordering(337) 00:16:31.816 fused_ordering(338) 00:16:31.816 fused_ordering(339) 00:16:31.816 fused_ordering(340) 00:16:31.816 fused_ordering(341) 00:16:31.816 fused_ordering(342) 00:16:31.816 fused_ordering(343) 00:16:31.816 fused_ordering(344) 00:16:31.816 fused_ordering(345) 00:16:31.816 fused_ordering(346) 00:16:31.816 fused_ordering(347) 00:16:31.816 fused_ordering(348) 00:16:31.816 fused_ordering(349) 00:16:31.816 fused_ordering(350) 00:16:31.816 fused_ordering(351) 00:16:31.816 fused_ordering(352) 00:16:31.816 fused_ordering(353) 00:16:31.816 fused_ordering(354) 00:16:31.816 fused_ordering(355) 00:16:31.816 fused_ordering(356) 00:16:31.816 fused_ordering(357) 00:16:31.816 fused_ordering(358) 00:16:31.816 fused_ordering(359) 00:16:31.816 fused_ordering(360) 00:16:31.816 fused_ordering(361) 00:16:31.816 fused_ordering(362) 00:16:31.816 fused_ordering(363) 00:16:31.816 fused_ordering(364) 00:16:31.816 fused_ordering(365) 00:16:31.816 fused_ordering(366) 00:16:31.816 fused_ordering(367) 00:16:31.816 fused_ordering(368) 00:16:31.816 fused_ordering(369) 00:16:31.816 fused_ordering(370) 00:16:31.816 fused_ordering(371) 00:16:31.816 fused_ordering(372) 00:16:31.816 fused_ordering(373) 00:16:31.816 fused_ordering(374) 00:16:31.816 fused_ordering(375) 00:16:31.816 fused_ordering(376) 00:16:31.816 fused_ordering(377) 00:16:31.816 fused_ordering(378) 00:16:31.816 fused_ordering(379) 00:16:31.816 fused_ordering(380) 00:16:31.816 fused_ordering(381) 00:16:31.816 fused_ordering(382) 00:16:31.816 fused_ordering(383) 00:16:31.816 fused_ordering(384) 00:16:31.816 fused_ordering(385) 00:16:31.816 fused_ordering(386) 00:16:31.816 fused_ordering(387) 00:16:31.816 fused_ordering(388) 00:16:31.816 fused_ordering(389) 00:16:31.816 fused_ordering(390) 00:16:31.816 fused_ordering(391) 00:16:31.816 fused_ordering(392) 00:16:31.816 fused_ordering(393) 00:16:31.816 fused_ordering(394) 00:16:31.816 fused_ordering(395) 00:16:31.816 fused_ordering(396) 00:16:31.816 fused_ordering(397) 00:16:31.816 fused_ordering(398) 00:16:31.816 fused_ordering(399) 00:16:31.816 fused_ordering(400) 00:16:31.816 fused_ordering(401) 00:16:31.816 fused_ordering(402) 00:16:31.816 fused_ordering(403) 00:16:31.816 fused_ordering(404) 00:16:31.816 fused_ordering(405) 00:16:31.816 fused_ordering(406) 00:16:31.816 fused_ordering(407) 00:16:31.816 fused_ordering(408) 00:16:31.816 fused_ordering(409) 00:16:31.816 fused_ordering(410) 00:16:32.386 fused_ordering(411) 00:16:32.386 fused_ordering(412) 00:16:32.386 fused_ordering(413) 00:16:32.386 fused_ordering(414) 00:16:32.386 fused_ordering(415) 00:16:32.386 fused_ordering(416) 00:16:32.386 fused_ordering(417) 00:16:32.386 fused_ordering(418) 00:16:32.386 fused_ordering(419) 00:16:32.386 fused_ordering(420) 00:16:32.386 fused_ordering(421) 00:16:32.386 fused_ordering(422) 00:16:32.386 fused_ordering(423) 00:16:32.386 fused_ordering(424) 00:16:32.386 fused_ordering(425) 00:16:32.386 fused_ordering(426) 00:16:32.386 fused_ordering(427) 00:16:32.386 fused_ordering(428) 00:16:32.386 fused_ordering(429) 00:16:32.386 
fused_ordering(430) 00:16:32.386 fused_ordering(431) 00:16:32.386 fused_ordering(432) 00:16:32.386 fused_ordering(433) 00:16:32.386 fused_ordering(434) 00:16:32.386 fused_ordering(435) 00:16:32.386 fused_ordering(436) 00:16:32.386 fused_ordering(437) 00:16:32.386 fused_ordering(438) 00:16:32.386 fused_ordering(439) 00:16:32.386 fused_ordering(440) 00:16:32.386 fused_ordering(441) 00:16:32.386 fused_ordering(442) 00:16:32.386 fused_ordering(443) 00:16:32.386 fused_ordering(444) 00:16:32.386 fused_ordering(445) 00:16:32.386 fused_ordering(446) 00:16:32.386 fused_ordering(447) 00:16:32.386 fused_ordering(448) 00:16:32.386 fused_ordering(449) 00:16:32.386 fused_ordering(450) 00:16:32.386 fused_ordering(451) 00:16:32.386 fused_ordering(452) 00:16:32.386 fused_ordering(453) 00:16:32.386 fused_ordering(454) 00:16:32.386 fused_ordering(455) 00:16:32.386 fused_ordering(456) 00:16:32.386 fused_ordering(457) 00:16:32.386 fused_ordering(458) 00:16:32.386 fused_ordering(459) 00:16:32.386 fused_ordering(460) 00:16:32.386 fused_ordering(461) 00:16:32.386 fused_ordering(462) 00:16:32.386 fused_ordering(463) 00:16:32.386 fused_ordering(464) 00:16:32.386 fused_ordering(465) 00:16:32.386 fused_ordering(466) 00:16:32.386 fused_ordering(467) 00:16:32.386 fused_ordering(468) 00:16:32.386 fused_ordering(469) 00:16:32.386 fused_ordering(470) 00:16:32.386 fused_ordering(471) 00:16:32.386 fused_ordering(472) 00:16:32.386 fused_ordering(473) 00:16:32.386 fused_ordering(474) 00:16:32.386 fused_ordering(475) 00:16:32.386 fused_ordering(476) 00:16:32.386 fused_ordering(477) 00:16:32.386 fused_ordering(478) 00:16:32.386 fused_ordering(479) 00:16:32.386 fused_ordering(480) 00:16:32.386 fused_ordering(481) 00:16:32.386 fused_ordering(482) 00:16:32.386 fused_ordering(483) 00:16:32.386 fused_ordering(484) 00:16:32.386 fused_ordering(485) 00:16:32.386 fused_ordering(486) 00:16:32.386 fused_ordering(487) 00:16:32.386 fused_ordering(488) 00:16:32.386 fused_ordering(489) 00:16:32.386 fused_ordering(490) 00:16:32.386 fused_ordering(491) 00:16:32.386 fused_ordering(492) 00:16:32.386 fused_ordering(493) 00:16:32.386 fused_ordering(494) 00:16:32.386 fused_ordering(495) 00:16:32.386 fused_ordering(496) 00:16:32.386 fused_ordering(497) 00:16:32.386 fused_ordering(498) 00:16:32.386 fused_ordering(499) 00:16:32.386 fused_ordering(500) 00:16:32.386 fused_ordering(501) 00:16:32.386 fused_ordering(502) 00:16:32.386 fused_ordering(503) 00:16:32.386 fused_ordering(504) 00:16:32.386 fused_ordering(505) 00:16:32.386 fused_ordering(506) 00:16:32.386 fused_ordering(507) 00:16:32.386 fused_ordering(508) 00:16:32.386 fused_ordering(509) 00:16:32.386 fused_ordering(510) 00:16:32.386 fused_ordering(511) 00:16:32.386 fused_ordering(512) 00:16:32.386 fused_ordering(513) 00:16:32.386 fused_ordering(514) 00:16:32.386 fused_ordering(515) 00:16:32.386 fused_ordering(516) 00:16:32.386 fused_ordering(517) 00:16:32.386 fused_ordering(518) 00:16:32.386 fused_ordering(519) 00:16:32.386 fused_ordering(520) 00:16:32.386 fused_ordering(521) 00:16:32.386 fused_ordering(522) 00:16:32.386 fused_ordering(523) 00:16:32.386 fused_ordering(524) 00:16:32.386 fused_ordering(525) 00:16:32.386 fused_ordering(526) 00:16:32.386 fused_ordering(527) 00:16:32.386 fused_ordering(528) 00:16:32.386 fused_ordering(529) 00:16:32.386 fused_ordering(530) 00:16:32.386 fused_ordering(531) 00:16:32.386 fused_ordering(532) 00:16:32.386 fused_ordering(533) 00:16:32.386 fused_ordering(534) 00:16:32.386 fused_ordering(535) 00:16:32.386 fused_ordering(536) 00:16:32.386 fused_ordering(537) 
00:16:32.386 fused_ordering(538) ... 00:16:34.262 fused_ordering(967) (sequential fused_ordering counter output, 538 through 967, elided)
00:16:34.262 fused_ordering(968) 00:16:34.262 fused_ordering(969) 00:16:34.262 fused_ordering(970) 00:16:34.262 fused_ordering(971) 00:16:34.262 fused_ordering(972) 00:16:34.262 fused_ordering(973) 00:16:34.262 fused_ordering(974) 00:16:34.262 fused_ordering(975) 00:16:34.262 fused_ordering(976) 00:16:34.262 fused_ordering(977) 00:16:34.262 fused_ordering(978) 00:16:34.262 fused_ordering(979) 00:16:34.262 fused_ordering(980) 00:16:34.262 fused_ordering(981) 00:16:34.262 fused_ordering(982) 00:16:34.262 fused_ordering(983) 00:16:34.262 fused_ordering(984) 00:16:34.262 fused_ordering(985) 00:16:34.262 fused_ordering(986) 00:16:34.262 fused_ordering(987) 00:16:34.262 fused_ordering(988) 00:16:34.262 fused_ordering(989) 00:16:34.262 fused_ordering(990) 00:16:34.262 fused_ordering(991) 00:16:34.262 fused_ordering(992) 00:16:34.262 fused_ordering(993) 00:16:34.262 fused_ordering(994) 00:16:34.262 fused_ordering(995) 00:16:34.262 fused_ordering(996) 00:16:34.262 fused_ordering(997) 00:16:34.262 fused_ordering(998) 00:16:34.262 fused_ordering(999) 00:16:34.262 fused_ordering(1000) 00:16:34.262 fused_ordering(1001) 00:16:34.262 fused_ordering(1002) 00:16:34.262 fused_ordering(1003) 00:16:34.262 fused_ordering(1004) 00:16:34.262 fused_ordering(1005) 00:16:34.262 fused_ordering(1006) 00:16:34.262 fused_ordering(1007) 00:16:34.262 fused_ordering(1008) 00:16:34.262 fused_ordering(1009) 00:16:34.262 fused_ordering(1010) 00:16:34.262 fused_ordering(1011) 00:16:34.262 fused_ordering(1012) 00:16:34.262 fused_ordering(1013) 00:16:34.262 fused_ordering(1014) 00:16:34.262 fused_ordering(1015) 00:16:34.262 fused_ordering(1016) 00:16:34.262 fused_ordering(1017) 00:16:34.262 fused_ordering(1018) 00:16:34.262 fused_ordering(1019) 00:16:34.262 fused_ordering(1020) 00:16:34.262 fused_ordering(1021) 00:16:34.262 fused_ordering(1022) 00:16:34.262 fused_ordering(1023) 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.262 rmmod nvme_tcp 00:16:34.262 rmmod nvme_fabrics 00:16:34.262 rmmod nvme_keyring 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1723463 ']' 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1723463 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 
1723463 ']' 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1723463 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1723463 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1723463' 00:16:34.262 killing process with pid 1723463 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1723463 00:16:34.262 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1723463 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.523 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:36.467 00:16:36.467 real 0m8.451s 00:16:36.467 user 0m5.584s 00:16:36.467 sys 0m4.404s 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:36.467 ************************************ 00:16:36.467 END TEST nvmf_fused_ordering 00:16:36.467 ************************************ 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:36.467 ************************************ 00:16:36.467 START TEST nvmf_ns_masking 00:16:36.467 ************************************ 00:16:36.467 06:12:29 
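The fused_ordering run is torn down by nvmftestfini before the next test begins: the nvme-tcp and nvme-fabrics kernel modules are unloaded (dropping nvme_tcp, nvme_fabrics and nvme_keyring), the nvmf_tgt process (pid 1723463 in this run) is killed, and the initiator-side address on cvl_0_1 is flushed. A minimal sketch of that cleanup, with the process id taken from this trace and the namespace-removal step assumed from _remove_spdk_ns:

    # Sketch of the teardown driven by nvmftestfini in this trace.
    modprobe -v -r nvme-tcp          # also removes nvme_fabrics and nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill 1723463                     # stop the nvmf_tgt reactor (killprocess also waits for it to exit)
    ip netns delete cvl_0_0_ns_spdk  # assumption: _remove_spdk_ns deletes the target namespace
    ip -4 addr flush cvl_0_1         # drop the initiator-side address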
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:36.467 * Looking for test storage... 00:16:36.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.467 06:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:36.467 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1cfdaa73-7403-4dd8-b7fd-dd3168bd278e 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8b6c85c0-a47e-43c7-a997-f783bc25a6e5 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4d879ae4-b7c6-42a1-91b1-7ae5562ccc6f 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.726 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
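ns_masking.sh first generates the identifiers it will exercise: two namespace UUIDs, a subsystem NQN, two host NQNs, and a host ID that is later passed to nvme connect -I so the target can match masking rules against this host. A minimal sketch of that setup, using the same variable names the trace shows (the UUID values are regenerated on every run):

    # Identifiers used by the masking test, as traced above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ns1uuid=$(uuidgen)     # 1cfdaa73-7403-4dd8-b7fd-dd3168bd278e in this run
    ns2uuid=$(uuidgen)     # 8b6c85c0-a47e-43c7-a997-f783bc25a6e5 in this run
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)      # 4d879ae4-b7c6-42a1-91b1-7ae5562ccc6f here; used with nvme connect -I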
nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:38.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.728 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:38.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:38.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
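NIC discovery is driven purely by PCI IDs: nvmf/common.sh caches the supported Intel E810/X722 and Mellanox device IDs, then resolves each matching function to its kernel net device through sysfs; on this machine both E810 ports (0x8086:0x159b, ice driver) at 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1. A minimal sketch of that sysfs lookup for one function, with the PCI address taken from this trace:

    # Map a PCI network function to its kernel interface name, as the trace does via sysfs.
    pci=0000:0a:00.0
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: $(basename "$netdir")"
    done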
tcp == tcp ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:38.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.729 06:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:38.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:16:38.729 00:16:38.729 --- 10.0.0.2 ping statistics --- 00:16:38.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.729 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:16:38.729 00:16:38.729 --- 10.0.0.1 ping statistics --- 00:16:38.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.729 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1725936 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1725936 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1725936 ']' 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
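nvmf_tcp_init then splits the two ports across a network namespace boundary: cvl_0_0 moves into cvl_0_0_ns_spdk and gets the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1, port 4420 is opened in iptables, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. A minimal sketch of that layout, mirroring the commands traced above:

    # Target/initiator split used for the NVMe/TCP tests (names and addresses from this trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1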
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.729 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:38.729 [2024-07-23 06:12:31.898236] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:16:38.729 [2024-07-23 06:12:31.898330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.729 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.729 [2024-07-23 06:12:31.936082] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:38.729 [2024-07-23 06:12:31.963510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.729 [2024-07-23 06:12:32.052211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.729 [2024-07-23 06:12:32.052276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.729 [2024-07-23 06:12:32.052289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.729 [2024-07-23 06:12:32.052300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.729 [2024-07-23 06:12:32.052310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:38.729 [2024-07-23 06:12:32.052341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.989 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.989 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:38.989 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:38.989 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:38.989 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:38.989 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.989 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:39.248 [2024-07-23 06:12:32.459048] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.248 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:39.248 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:39.248 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:39.506 Malloc1 00:16:39.506 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:39.764 Malloc2 00:16:39.764 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:40.021 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:40.279 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.538 [2024-07-23 06:12:33.770126] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.538 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:40.538 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d879ae4-b7c6-42a1-91b1-7ae5562ccc6f -a 10.0.0.2 -s 4420 -i 4 00:16:40.797 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.797 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:40.797 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.797 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:40.797 
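Once nvmf_tgt is listening on its RPC socket, the test configures it over JSON-RPC: a TCP transport, two 64 MiB / 512 B malloc bdevs, subsystem cnode1 (created with -a, so any host may connect), Malloc1 attached as namespace 1, and a listener on 10.0.0.2:4420; the host side then connects with the generated host ID so later masking rules can single it out. A minimal sketch of that sequence as traced above (the rpc.py path is shortened to $rpc_py):

    # Target-side configuration issued in this trace, followed by the host-side connect.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect with the generated host ID so masking rules can target this host.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4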
06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:42.702 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:42.702 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:42.702 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.702 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:42.702 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.702 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:42.702 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:42.702 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:42.960 [ 0]:0x1 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b4a7ed3f6fa4e5e9226cea78b9525ee 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b4a7ed3f6fa4e5e9226cea78b9525ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.960 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:43.218 [ 0]:0x1 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b4a7ed3f6fa4e5e9226cea78b9525ee 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b4a7ed3f6fa4e5e9226cea78b9525ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:43.218 06:12:36 
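Visibility is checked entirely from the host with nvme-cli: the ns_is_visible helper greps nvme list-ns for the NSID and then reads the namespace GUID via nvme id-ns -o json | jq -r .nguid; an all-zero NGUID means the namespace exists in the subsystem but is not exposed to this controller. A minimal sketch of the helper as it appears in the trace:

    # Mirrors the ns_is_visible helper seen in this trace: NSID $1 on controller /dev/nvme0.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]   # all-zero NGUID == masked
    }
    ns_is_visible 0x1   # prints a line such as "[ 0]:0x1" and returns 0 when namespace 1 is exposed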
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:43.218 [ 1]:0x2 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06651bb80a8c437ba75a01aca311322e 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06651bb80a8c437ba75a01aca311322e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.218 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.476 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:43.735 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:43.735 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d879ae4-b7c6-42a1-91b1-7ae5562ccc6f -a 10.0.0.2 -s 4420 -i 4 00:16:43.994 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:43.994 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:43.994 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.994 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:43.994 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:43.994 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
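This is the actual masking step: namespace 1 is detached and re-attached with --no-auto-visible, so after reconnecting the host should see only namespace 2 until it is explicitly granted access; the NOT ns_is_visible 0x1 check that follows expects the all-zero NGUID. A minimal sketch of the re-attach, using the RPCs traced here:

    # Re-attach namespace 1 as masked: not exposed to any host until one is explicitly added.
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible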
return 0 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:46.532 [ 0]:0x2 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.532 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=06651bb80a8c437ba75a01aca311322e 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06651bb80a8c437ba75a01aca311322e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:46.533 [ 0]:0x1 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b4a7ed3f6fa4e5e9226cea78b9525ee 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b4a7ed3f6fa4e5e9226cea78b9525ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:46.533 [ 1]:0x2 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:46.533 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.791 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06651bb80a8c437ba75a01aca311322e 00:16:46.791 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06651bb80a8c437ba75a01aca311322e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.791 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.791 06:12:40 
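Per-host visibility is then toggled without tearing the connection down: nvmf_ns_add_host grants nqn.2016-06.io.spdk:host1 access to namespace 1 (both 0x1 and 0x2 become visible above), and nvmf_ns_remove_host revokes it again, after which only 0x2 remains. A minimal sketch of the two RPCs as traced here:

    # Grant, then revoke, host1's access to the masked namespace (NSID 1) on cnode1.
    $rpc_py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1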
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.791 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:47.049 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:47.049 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:47.049 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:47.049 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:47.049 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:47.050 [ 0]:0x2 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06651bb80a8c437ba75a01aca311322e 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06651bb80a8c437ba75a01aca311322e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.050 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:47.309 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:47.309 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d879ae4-b7c6-42a1-91b1-7ae5562ccc6f -a 10.0.0.2 -s 4420 -i 4 00:16:47.572 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:47.572 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.572 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.572 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:47.572 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:47.572 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:49.480 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:49.739 [ 0]:0x1 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b4a7ed3f6fa4e5e9226cea78b9525ee 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b4a7ed3f6fa4e5e9226cea78b9525ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:49.739 [ 1]:0x2 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06651bb80a8c437ba75a01aca311322e 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06651bb80a8c437ba75a01aca311322e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.739 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:49.997 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:49.997 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:49.997 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:49.997 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:49.997 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.997 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:49.997 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:49.998 [ 0]:0x2 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06651bb80a8c437ba75a01aca311322e 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06651bb80a8c437ba75a01aca311322e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.998 06:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:49.998 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:50.256 [2024-07-23 06:12:43.580044] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:50.256 request: 00:16:50.256 { 00:16:50.256 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.256 "nsid": 2, 00:16:50.256 "host": "nqn.2016-06.io.spdk:host1", 00:16:50.256 "method": "nvmf_ns_remove_host", 00:16:50.256 "req_id": 1 00:16:50.256 } 00:16:50.256 Got JSON-RPC error response 00:16:50.256 response: 00:16:50.256 { 00:16:50.256 "code": -32602, 00:16:50.256 "message": "Invalid parameters" 00:16:50.256 } 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:50.514 06:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:50.514 [ 0]:0x2 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06651bb80a8c437ba75a01aca311322e 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06651bb80a8c437ba75a01aca311322e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.514 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1727432 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1727432 /var/tmp/host.sock 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1727432 ']' 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:50.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.515 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.515 [2024-07-23 06:12:43.775222] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:16:50.515 [2024-07-23 06:12:43.775316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727432 ] 00:16:50.515 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.515 [2024-07-23 06:12:43.807383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
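The trace above exercises namespace masking end to end: on the host side a namespace counts as visible when it shows up in nvme list-ns and reports a non-zero NGUID from nvme id-ns, while on the target side visibility is granted or revoked per host NQN through SPDK's rpc.py. A condensed sketch of that pattern, reconstructed from the commands in the trace (controller device and rpc.py path abbreviated; this is illustrative, not the script itself):

  # visibility probe mirroring the ns_is_visible checks seen in the trace
  CTRL=/dev/nvme0                      # as resolved from 'nvme list-subsys -o json' above
  ns_visible() {                       # $1 = NSID, e.g. 0x1
      nvme list-ns "$CTRL" | grep -q "$1" || return 1
      local nguid
      nguid=$(nvme id-ns "$CTRL" -n "$1" -o json | jq -r .nguid)
      [[ "$nguid" != "00000000000000000000000000000000" ]]   # masked namespaces read back as all zeroes
  }

  # target side: allow or revoke host1 for namespace 1 of cnode1, as in the trace
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

As the JSON-RPC error below shows, nvmf_ns_remove_host is rejected with -32602 Invalid parameters when the host was never added for that namespace, which is exactly what the NOT wrapper in the trace expects.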
00:16:50.515 [2024-07-23 06:12:43.836662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.775 [2024-07-23 06:12:43.926005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.034 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.034 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:51.034 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:51.329 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:51.588 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1cfdaa73-7403-4dd8-b7fd-dd3168bd278e 00:16:51.588 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:51.588 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1CFDAA7374034DD8B7FDDD3168BD278E -i 00:16:51.588 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8b6c85c0-a47e-43c7-a997-f783bc25a6e5 00:16:51.588 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:51.588 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8B6C85C0A47E43C7A997F783BC25A6E5 -i 00:16:52.154 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:52.154 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:52.413 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:52.413 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:52.983 nvme0n1 00:16:52.983 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:52.983 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:53.256 nvme1n2 00:16:53.256 06:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:53.256 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:53.256 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:53.256 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:53.256 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:53.569 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:53.569 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:53.569 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:53.569 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:53.831 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1cfdaa73-7403-4dd8-b7fd-dd3168bd278e == \1\c\f\d\a\a\7\3\-\7\4\0\3\-\4\d\d\8\-\b\7\f\d\-\d\d\3\1\6\8\b\d\2\7\8\e ]] 00:16:53.831 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:53.831 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:53.831 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 8b6c85c0-a47e-43c7-a997-f783bc25a6e5 == \8\b\6\c\8\5\c\0\-\a\4\7\e\-\4\3\c\7\-\a\9\9\7\-\f\7\8\3\b\c\2\5\a\6\e\5 ]] 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1727432 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1727432 ']' 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1727432 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1727432 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1727432' 00:16:54.091 killing process with pid 1727432 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1727432 00:16:54.091 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1727432 00:16:54.350 06:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.608 rmmod nvme_tcp 00:16:54.608 rmmod nvme_fabrics 00:16:54.608 rmmod nvme_keyring 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1725936 ']' 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1725936 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1725936 ']' 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1725936 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.608 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1725936 00:16:54.867 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.867 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.868 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1725936' 00:16:54.868 killing process with pid 1725936 00:16:54.868 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1725936 00:16:54.868 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1725936 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.126 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.033 00:16:57.033 real 0m20.546s 00:16:57.033 user 0m26.654s 00:16:57.033 sys 0m4.044s 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:57.033 ************************************ 00:16:57.033 END TEST nvmf_ns_masking 00:16:57.033 ************************************ 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:57.033 ************************************ 00:16:57.033 START TEST nvmf_nvme_cli 00:16:57.033 ************************************ 00:16:57.033 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:57.292 * Looking for test storage... 
00:16:57.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.292 06:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.292 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.201 06:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:59.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:59.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:59.201 06:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.201 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:59.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:59.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.202 06:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:59.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:16:59.202 00:16:59.202 --- 10.0.0.2 ping statistics --- 00:16:59.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.202 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:16:59.202 00:16:59.202 --- 10.0.0.1 ping statistics --- 00:16:59.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.202 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.202 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1729919 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1729919 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1729919 ']' 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.463 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.463 [2024-07-23 06:12:52.604333] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
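The fabric that the nvmf target above listens on is prepared a few lines earlier: the target port is moved into its own network namespace, both sides get 10.0.0.x addresses, the NVMe/TCP port is opened in iptables, reachability is checked with ping in both directions, and the host driver is loaded. Reproduced from the commands in the trace (interface names cvl_0_0/cvl_0_1 belong to this test bed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port in its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP on 4420
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  modprobe nvme-tcp                                                   # host-side NVMe/TCP transport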
00:16:59.463 [2024-07-23 06:12:52.604419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.463 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.463 [2024-07-23 06:12:52.641764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:59.463 [2024-07-23 06:12:52.669136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.463 [2024-07-23 06:12:52.759250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.463 [2024-07-23 06:12:52.759314] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.463 [2024-07-23 06:12:52.759331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.463 [2024-07-23 06:12:52.759342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.463 [2024-07-23 06:12:52.759351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.463 [2024-07-23 06:12:52.759433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.463 [2024-07-23 06:12:52.759499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.463 [2024-07-23 06:12:52.759565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.463 [2024-07-23 06:12:52.759567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.722 [2024-07-23 06:12:52.917201] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.722 Malloc0 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
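The target for the nvme_cli test is configured entirely over JSON-RPC: the TCP transport is created above, and the trace continues below with a second 64 MB / 512-byte-block Malloc bdev, a subsystem with serial SPDKISFASTANDAWESOME and model SPDK_Controller1, both namespaces, and data plus discovery listeners on 10.0.0.2:4420. Collected in one place as a sketch (the script issues these through its rpc_cmd helper; the rpc.py path is abbreviated and flags are copied from the trace):

  rpc=./scripts/rpc.py               # abbreviation of the full workspace path used in the trace
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420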
00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.722 Malloc1 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.722 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.723 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 [2024-07-23 06:12:52.999025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.723 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.723 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:59.723 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.723 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.723 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:59.982 00:16:59.982 Discovery Log Number of Records 2, Generation counter 2 00:16:59.982 =====Discovery 
Log Entry 0====== 00:16:59.982 trtype: tcp 00:16:59.982 adrfam: ipv4 00:16:59.982 subtype: current discovery subsystem 00:16:59.982 treq: not required 00:16:59.982 portid: 0 00:16:59.982 trsvcid: 4420 00:16:59.982 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:59.982 traddr: 10.0.0.2 00:16:59.982 eflags: explicit discovery connections, duplicate discovery information 00:16:59.982 sectype: none 00:16:59.982 =====Discovery Log Entry 1====== 00:16:59.982 trtype: tcp 00:16:59.982 adrfam: ipv4 00:16:59.982 subtype: nvme subsystem 00:16:59.982 treq: not required 00:16:59.982 portid: 0 00:16:59.982 trsvcid: 4420 00:16:59.982 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:59.982 traddr: 10.0.0.2 00:16:59.982 eflags: none 00:16:59.982 sectype: none 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:59.982 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.551 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:00.551 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:00.551 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.551 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:00.551 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:00.551 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.085 06:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.085 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:03.085 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:03.085 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.085 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:03.086 /dev/nvme0n1 ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.086 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:03.086 06:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.346 rmmod nvme_tcp 00:17:03.346 rmmod nvme_fabrics 00:17:03.346 rmmod nvme_keyring 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1729919 ']' 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1729919 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1729919 ']' 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1729919 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:17:03.346 06:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1729919 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1729919' 00:17:03.346 killing process with pid 1729919 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1729919 00:17:03.346 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1729919 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.606 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.139 00:17:06.139 real 0m8.561s 00:17:06.139 user 0m16.641s 00:17:06.139 sys 0m2.197s 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:06.139 ************************************ 00:17:06.139 END TEST nvmf_nvme_cli 00:17:06.139 ************************************ 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.139 06:12:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.139 ************************************ 00:17:06.139 START TEST nvmf_vfio_user 00:17:06.140 ************************************ 00:17:06.140 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:06.140 * Looking for test storage... 
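Stripped of the xtrace noise, the host-side nvme-cli cycle the nvmf_nvme_cli test above exercised is roughly the following (hostnqn/hostid abbreviated, addresses as in the trace):

    nvme discover   --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect    --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2, one block device per attached namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1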
00:17:06.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
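The --hostnqn/--hostid pair used by those nvme-cli calls comes from nvme gen-hostnqn, as the common.sh lines above show; a small sketch of that relationship (the parameter expansion used to pull out the uuid is an assumption for illustration, not the test's own code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # the trailing uuid, passed as --hostid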
00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:06.140 06:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1730839 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1730839' 00:17:06.140 Process pid: 1730839 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1730839 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1730839 ']' 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:06.140 [2024-07-23 06:12:59.074704] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:06.140 [2024-07-23 06:12:59.074798] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.140 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.140 [2024-07-23 06:12:59.113496] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:06.140 [2024-07-23 06:12:59.144170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.140 [2024-07-23 06:12:59.239541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
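Note that this target instance is started with -m '[0,1,2,3]' where the earlier one used -m 0xF; assuming standard SPDK/DPDK -m semantics, both forms select the same four cores:

    nvmf_tgt -m 0xF          # hexadecimal core mask, cores 0-3
    nvmf_tgt -m '[0,1,2,3]'  # explicit core list, cores 0-3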
00:17:06.140 [2024-07-23 06:12:59.239604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.140 [2024-07-23 06:12:59.239628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.140 [2024-07-23 06:12:59.239643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.140 [2024-07-23 06:12:59.239655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.140 [2024-07-23 06:12:59.239713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.140 [2024-07-23 06:12:59.239743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.140 [2024-07-23 06:12:59.239797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.140 [2024-07-23 06:12:59.239800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:06.140 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:07.076 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:07.359 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:07.359 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:07.359 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:07.359 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:07.359 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:07.624 Malloc1 00:17:07.624 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:07.882 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:08.141 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:08.399 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:08.399 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:08.399 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:08.657 Malloc2 00:17:08.657 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:08.915 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:09.173 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:09.432 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:09.432 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:09.432 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:09.432 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:09.432 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:09.432 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:09.432 [2024-07-23 06:13:02.670093] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:09.432 [2024-07-23 06:13:02.670130] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731266 ] 00:17:09.432 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.433 [2024-07-23 06:13:02.687288] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
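Condensed from the trace, the vfio-user target is provisioned with one VFIOUSER transport and then, per device, a socket directory, a malloc bdev, a subsystem, a namespace and a listener; the sequence below shows device 1, and device 2 repeats the last five steps with cnode2/Malloc2/vfio-user2:

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0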
00:17:09.433 [2024-07-23 06:13:02.704827] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:09.433 [2024-07-23 06:13:02.707296] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:09.433 [2024-07-23 06:13:02.707325] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb3fe8c3000 00:17:09.433 [2024-07-23 06:13:02.708288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.709286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.710287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.711294] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.712297] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.713303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.714304] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.715309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.433 [2024-07-23 06:13:02.716312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:09.433 [2024-07-23 06:13:02.716332] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb3fd685000 00:17:09.433 [2024-07-23 06:13:02.717444] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:09.433 [2024-07-23 06:13:02.731246] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:09.433 [2024-07-23 06:13:02.731279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:09.433 [2024-07-23 06:13:02.736417] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:09.433 [2024-07-23 06:13:02.736469] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:09.433 [2024-07-23 06:13:02.736560] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:09.433 [2024-07-23 06:13:02.736587] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:09.433 [2024-07-23 06:13:02.736621] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
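For reference, the spdk_nvme_identify invocation producing the controller-init trace around this point is the one shown above; -r carries the vfio-user transport ID (traddr is the listener's socket directory) and the -L flags enable the nvme/nvme_vfio/vfio_pci debug logs that make up most of these lines:

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci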
00:17:09.433 [2024-07-23 06:13:02.737410] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:09.433 [2024-07-23 06:13:02.737431] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:09.433 [2024-07-23 06:13:02.737444] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:09.433 [2024-07-23 06:13:02.738413] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:09.433 [2024-07-23 06:13:02.738435] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:09.433 [2024-07-23 06:13:02.738449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:09.433 [2024-07-23 06:13:02.739421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:09.433 [2024-07-23 06:13:02.739440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:09.433 [2024-07-23 06:13:02.740428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:09.433 [2024-07-23 06:13:02.740445] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:09.433 [2024-07-23 06:13:02.740454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:09.433 [2024-07-23 06:13:02.740465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:09.433 [2024-07-23 06:13:02.740574] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:09.433 [2024-07-23 06:13:02.740582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:09.433 [2024-07-23 06:13:02.740590] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:09.433 [2024-07-23 06:13:02.741433] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:09.433 [2024-07-23 06:13:02.742436] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:09.433 [2024-07-23 06:13:02.745628] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:09.433 [2024-07-23 06:13:02.746445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.433 [2024-07-23 06:13:02.746542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:17:09.433 [2024-07-23 06:13:02.747461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:09.433 [2024-07-23 06:13:02.747480] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:09.433 [2024-07-23 06:13:02.747489] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:09.433 [2024-07-23 06:13:02.747527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747549] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.433 [2024-07-23 06:13:02.747574] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.433 [2024-07-23 06:13:02.747580] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.433 [2024-07-23 06:13:02.747632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.433 [2024-07-23 06:13:02.747692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:09.433 [2024-07-23 06:13:02.747711] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:09.433 [2024-07-23 06:13:02.747720] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:09.433 [2024-07-23 06:13:02.747728] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:09.433 [2024-07-23 06:13:02.747735] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:09.433 [2024-07-23 06:13:02.747743] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:09.433 [2024-07-23 06:13:02.747751] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:09.433 [2024-07-23 06:13:02.747758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:09.433 [2024-07-23 06:13:02.747807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:09.433 [2024-07-23 06:13:02.747827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.433 [2024-07-23 06:13:02.747841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.433 [2024-07-23 06:13:02.747853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.433 [2024-07-23 06:13:02.747865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.433 [2024-07-23 06:13:02.747873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:09.433 [2024-07-23 06:13:02.747923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:09.433 [2024-07-23 06:13:02.747948] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:09.433 [2024-07-23 06:13:02.747962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:09.433 [2024-07-23 06:13:02.747994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:09.433 [2024-07-23 06:13:02.748009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:09.433 [2024-07-23 06:13:02.748075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748103] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:09.434 [2024-07-23 06:13:02.748110] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:09.434 [2024-07-23 06:13:02.748116] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.434 [2024-07-23 06:13:02.748125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748156] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:09.434 [2024-07-23 06:13:02.748174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748199] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.434 [2024-07-23 06:13:02.748206] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.434 [2024-07-23 06:13:02.748212] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.434 [2024-07-23 06:13:02.748221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748292] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.434 [2024-07-23 06:13:02.748300] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.434 [2024-07-23 06:13:02.748306] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.434 [2024-07-23 06:13:02.748315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748397] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748405] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:09.434 [2024-07-23 06:13:02.748412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:09.434 [2024-07-23 06:13:02.748420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:09.434 [2024-07-23 06:13:02.748445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748569] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:09.434 [2024-07-23 06:13:02.748578] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:09.434 [2024-07-23 06:13:02.748585] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:09.434 [2024-07-23 06:13:02.748605] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:09.434 [2024-07-23 06:13:02.748611] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:09.434 [2024-07-23 06:13:02.748628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:09.434 [2024-07-23 06:13:02.748641] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:09.434 [2024-07-23 06:13:02.748649] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:09.434 [2024-07-23 06:13:02.748670] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.434 [2024-07-23 06:13:02.748679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748691] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:09.434 [2024-07-23 06:13:02.748699] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.434 [2024-07-23 06:13:02.748705] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.434 [2024-07-23 06:13:02.748714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748729] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:09.434 [2024-07-23 06:13:02.748738] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:09.434 [2024-07-23 06:13:02.748744] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.434 [2024-07-23 06:13:02.748753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:09.434 [2024-07-23 06:13:02.748765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:09.434 [2024-07-23 06:13:02.748816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:09.434 ===================================================== 00:17:09.434 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:09.434 ===================================================== 00:17:09.434 Controller Capabilities/Features 00:17:09.434 ================================ 00:17:09.434 Vendor ID: 4e58 00:17:09.434 Subsystem Vendor ID: 4e58 00:17:09.434 Serial Number: SPDK1 00:17:09.434 Model Number: SPDK bdev Controller 00:17:09.434 Firmware Version: 24.09 00:17:09.434 Recommended Arb Burst: 6 00:17:09.434 IEEE OUI Identifier: 8d 6b 50 00:17:09.434 Multi-path I/O 00:17:09.434 May have multiple subsystem ports: Yes 00:17:09.434 May have multiple controllers: Yes 00:17:09.434 Associated with SR-IOV VF: No 00:17:09.434 Max Data Transfer Size: 131072 00:17:09.434 Max Number of Namespaces: 32 00:17:09.434 Max Number of I/O Queues: 127 00:17:09.434 NVMe Specification Version (VS): 1.3 00:17:09.434 NVMe Specification Version (Identify): 1.3 00:17:09.434 Maximum Queue Entries: 256 00:17:09.434 Contiguous Queues Required: Yes 00:17:09.434 Arbitration Mechanisms Supported 00:17:09.434 Weighted Round Robin: Not Supported 00:17:09.434 Vendor Specific: Not Supported 00:17:09.434 Reset Timeout: 15000 ms 00:17:09.434 Doorbell Stride: 4 bytes 00:17:09.434 NVM Subsystem Reset: Not Supported 00:17:09.434 Command Sets Supported 00:17:09.434 NVM Command Set: Supported 00:17:09.434 Boot Partition: Not Supported 00:17:09.434 Memory Page Size Minimum: 4096 bytes 00:17:09.434 Memory Page Size Maximum: 4096 bytes 00:17:09.434 Persistent Memory Region: Not Supported 00:17:09.434 Optional Asynchronous Events Supported 00:17:09.434 Namespace Attribute Notices: 
Supported 00:17:09.434 Firmware Activation Notices: Not Supported 00:17:09.434 ANA Change Notices: Not Supported 00:17:09.434 PLE Aggregate Log Change Notices: Not Supported 00:17:09.434 LBA Status Info Alert Notices: Not Supported 00:17:09.434 EGE Aggregate Log Change Notices: Not Supported 00:17:09.434 Normal NVM Subsystem Shutdown event: Not Supported 00:17:09.434 Zone Descriptor Change Notices: Not Supported 00:17:09.434 Discovery Log Change Notices: Not Supported 00:17:09.434 Controller Attributes 00:17:09.434 128-bit Host Identifier: Supported 00:17:09.434 Non-Operational Permissive Mode: Not Supported 00:17:09.434 NVM Sets: Not Supported 00:17:09.434 Read Recovery Levels: Not Supported 00:17:09.435 Endurance Groups: Not Supported 00:17:09.435 Predictable Latency Mode: Not Supported 00:17:09.435 Traffic Based Keep ALive: Not Supported 00:17:09.435 Namespace Granularity: Not Supported 00:17:09.435 SQ Associations: Not Supported 00:17:09.435 UUID List: Not Supported 00:17:09.435 Multi-Domain Subsystem: Not Supported 00:17:09.435 Fixed Capacity Management: Not Supported 00:17:09.435 Variable Capacity Management: Not Supported 00:17:09.435 Delete Endurance Group: Not Supported 00:17:09.435 Delete NVM Set: Not Supported 00:17:09.435 Extended LBA Formats Supported: Not Supported 00:17:09.435 Flexible Data Placement Supported: Not Supported 00:17:09.435 00:17:09.435 Controller Memory Buffer Support 00:17:09.435 ================================ 00:17:09.435 Supported: No 00:17:09.435 00:17:09.435 Persistent Memory Region Support 00:17:09.435 ================================ 00:17:09.435 Supported: No 00:17:09.435 00:17:09.435 Admin Command Set Attributes 00:17:09.435 ============================ 00:17:09.435 Security Send/Receive: Not Supported 00:17:09.435 Format NVM: Not Supported 00:17:09.435 Firmware Activate/Download: Not Supported 00:17:09.435 Namespace Management: Not Supported 00:17:09.435 Device Self-Test: Not Supported 00:17:09.435 Directives: Not Supported 00:17:09.435 NVMe-MI: Not Supported 00:17:09.435 Virtualization Management: Not Supported 00:17:09.435 Doorbell Buffer Config: Not Supported 00:17:09.435 Get LBA Status Capability: Not Supported 00:17:09.435 Command & Feature Lockdown Capability: Not Supported 00:17:09.435 Abort Command Limit: 4 00:17:09.435 Async Event Request Limit: 4 00:17:09.435 Number of Firmware Slots: N/A 00:17:09.435 Firmware Slot 1 Read-Only: N/A 00:17:09.435 Firmware Activation Without Reset: N/A 00:17:09.435 Multiple Update Detection Support: N/A 00:17:09.435 Firmware Update Granularity: No Information Provided 00:17:09.435 Per-Namespace SMART Log: No 00:17:09.435 Asymmetric Namespace Access Log Page: Not Supported 00:17:09.435 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:09.435 Command Effects Log Page: Supported 00:17:09.435 Get Log Page Extended Data: Supported 00:17:09.435 Telemetry Log Pages: Not Supported 00:17:09.435 Persistent Event Log Pages: Not Supported 00:17:09.435 Supported Log Pages Log Page: May Support 00:17:09.435 Commands Supported & Effects Log Page: Not Supported 00:17:09.435 Feature Identifiers & Effects Log Page:May Support 00:17:09.435 NVMe-MI Commands & Effects Log Page: May Support 00:17:09.435 Data Area 4 for Telemetry Log: Not Supported 00:17:09.435 Error Log Page Entries Supported: 128 00:17:09.435 Keep Alive: Supported 00:17:09.435 Keep Alive Granularity: 10000 ms 00:17:09.435 00:17:09.435 NVM Command Set Attributes 00:17:09.435 ========================== 00:17:09.435 Submission Queue Entry Size 00:17:09.435 Max: 64 
00:17:09.435 Min: 64 00:17:09.435 Completion Queue Entry Size 00:17:09.435 Max: 16 00:17:09.435 Min: 16 00:17:09.435 Number of Namespaces: 32 00:17:09.435 Compare Command: Supported 00:17:09.435 Write Uncorrectable Command: Not Supported 00:17:09.435 Dataset Management Command: Supported 00:17:09.435 Write Zeroes Command: Supported 00:17:09.435 Set Features Save Field: Not Supported 00:17:09.435 Reservations: Not Supported 00:17:09.435 Timestamp: Not Supported 00:17:09.435 Copy: Supported 00:17:09.435 Volatile Write Cache: Present 00:17:09.435 Atomic Write Unit (Normal): 1 00:17:09.435 Atomic Write Unit (PFail): 1 00:17:09.435 Atomic Compare & Write Unit: 1 00:17:09.435 Fused Compare & Write: Supported 00:17:09.435 Scatter-Gather List 00:17:09.435 SGL Command Set: Supported (Dword aligned) 00:17:09.435 SGL Keyed: Not Supported 00:17:09.435 SGL Bit Bucket Descriptor: Not Supported 00:17:09.435 SGL Metadata Pointer: Not Supported 00:17:09.435 Oversized SGL: Not Supported 00:17:09.435 SGL Metadata Address: Not Supported 00:17:09.435 SGL Offset: Not Supported 00:17:09.435 Transport SGL Data Block: Not Supported 00:17:09.435 Replay Protected Memory Block: Not Supported 00:17:09.435 00:17:09.435 Firmware Slot Information 00:17:09.435 ========================= 00:17:09.435 Active slot: 1 00:17:09.435 Slot 1 Firmware Revision: 24.09 00:17:09.435 00:17:09.435 00:17:09.435 Commands Supported and Effects 00:17:09.435 ============================== 00:17:09.435 Admin Commands 00:17:09.435 -------------- 00:17:09.435 Get Log Page (02h): Supported 00:17:09.435 Identify (06h): Supported 00:17:09.435 Abort (08h): Supported 00:17:09.435 Set Features (09h): Supported 00:17:09.435 Get Features (0Ah): Supported 00:17:09.435 Asynchronous Event Request (0Ch): Supported 00:17:09.435 Keep Alive (18h): Supported 00:17:09.435 I/O Commands 00:17:09.435 ------------ 00:17:09.435 Flush (00h): Supported LBA-Change 00:17:09.435 Write (01h): Supported LBA-Change 00:17:09.435 Read (02h): Supported 00:17:09.435 Compare (05h): Supported 00:17:09.435 Write Zeroes (08h): Supported LBA-Change 00:17:09.435 Dataset Management (09h): Supported LBA-Change 00:17:09.435 Copy (19h): Supported LBA-Change 00:17:09.435 00:17:09.435 Error Log 00:17:09.435 ========= 00:17:09.435 00:17:09.435 Arbitration 00:17:09.435 =========== 00:17:09.435 Arbitration Burst: 1 00:17:09.435 00:17:09.435 Power Management 00:17:09.435 ================ 00:17:09.435 Number of Power States: 1 00:17:09.435 Current Power State: Power State #0 00:17:09.435 Power State #0: 00:17:09.435 Max Power: 0.00 W 00:17:09.435 Non-Operational State: Operational 00:17:09.435 Entry Latency: Not Reported 00:17:09.435 Exit Latency: Not Reported 00:17:09.435 Relative Read Throughput: 0 00:17:09.435 Relative Read Latency: 0 00:17:09.435 Relative Write Throughput: 0 00:17:09.435 Relative Write Latency: 0 00:17:09.435 Idle Power: Not Reported 00:17:09.435 Active Power: Not Reported 00:17:09.435 Non-Operational Permissive Mode: Not Supported 00:17:09.435 00:17:09.435 Health Information 00:17:09.435 ================== 00:17:09.435 Critical Warnings: 00:17:09.435 Available Spare Space: OK 00:17:09.435 Temperature: OK 00:17:09.435 Device Reliability: OK 00:17:09.435 Read Only: No 00:17:09.435 Volatile Memory Backup: OK 00:17:09.435 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:09.435 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:09.435 Available Spare: 0% 00:17:09.435 Available Sp[2024-07-23 06:13:02.748970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:09.435 [2024-07-23 06:13:02.748987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:09.435 [2024-07-23 06:13:02.749042] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:09.435 [2024-07-23 06:13:02.749060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.435 [2024-07-23 06:13:02.749070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.435 [2024-07-23 06:13:02.749080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.435 [2024-07-23 06:13:02.749089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.435 [2024-07-23 06:13:02.749468] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:09.435 [2024-07-23 06:13:02.749487] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:09.435 [2024-07-23 06:13:02.750468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.435 [2024-07-23 06:13:02.750539] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:09.435 [2024-07-23 06:13:02.750553] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:09.435 [2024-07-23 06:13:02.751475] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:09.435 [2024-07-23 06:13:02.751498] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:09.435 [2024-07-23 06:13:02.751549] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:09.435 [2024-07-23 06:13:02.755625] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:09.696 are Threshold: 0% 00:17:09.696 Life Percentage Used: 0% 00:17:09.696 Data Units Read: 0 00:17:09.696 Data Units Written: 0 00:17:09.696 Host Read Commands: 0 00:17:09.696 Host Write Commands: 0 00:17:09.696 Controller Busy Time: 0 minutes 00:17:09.696 Power Cycles: 0 00:17:09.696 Power On Hours: 0 hours 00:17:09.696 Unsafe Shutdowns: 0 00:17:09.696 Unrecoverable Media Errors: 0 00:17:09.696 Lifetime Error Log Entries: 0 00:17:09.696 Warning Temperature Time: 0 minutes 00:17:09.696 Critical Temperature Time: 0 minutes 00:17:09.696 00:17:09.696 Number of Queues 00:17:09.696 ================ 00:17:09.696 Number of I/O Submission Queues: 127 00:17:09.696 Number of I/O Completion Queues: 127 00:17:09.696 00:17:09.696 Active Namespaces 00:17:09.696 ================= 00:17:09.696 Namespace ID:1 00:17:09.696 Error Recovery Timeout: Unlimited 00:17:09.696 Command Set Identifier: NVM (00h) 00:17:09.696 Deallocate: Supported 00:17:09.696 Deallocated/Unwritten Error: Not 
Supported 00:17:09.696 Deallocated Read Value: Unknown 00:17:09.696 Deallocate in Write Zeroes: Not Supported 00:17:09.696 Deallocated Guard Field: 0xFFFF 00:17:09.696 Flush: Supported 00:17:09.696 Reservation: Supported 00:17:09.696 Namespace Sharing Capabilities: Multiple Controllers 00:17:09.696 Size (in LBAs): 131072 (0GiB) 00:17:09.696 Capacity (in LBAs): 131072 (0GiB) 00:17:09.696 Utilization (in LBAs): 131072 (0GiB) 00:17:09.696 NGUID: F5835EBABFB74245858A26A7FA2EAB2C 00:17:09.696 UUID: f5835eba-bfb7-4245-858a-26a7fa2eab2c 00:17:09.696 Thin Provisioning: Not Supported 00:17:09.696 Per-NS Atomic Units: Yes 00:17:09.696 Atomic Boundary Size (Normal): 0 00:17:09.696 Atomic Boundary Size (PFail): 0 00:17:09.696 Atomic Boundary Offset: 0 00:17:09.696 Maximum Single Source Range Length: 65535 00:17:09.696 Maximum Copy Length: 65535 00:17:09.696 Maximum Source Range Count: 1 00:17:09.696 NGUID/EUI64 Never Reused: No 00:17:09.696 Namespace Write Protected: No 00:17:09.696 Number of LBA Formats: 1 00:17:09.696 Current LBA Format: LBA Format #00 00:17:09.696 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:09.696 00:17:09.696 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:09.696 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.696 [2024-07-23 06:13:02.987447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:14.971 Initializing NVMe Controllers 00:17:14.971 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:14.971 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:14.971 Initialization complete. Launching workers. 00:17:14.971 ======================================================== 00:17:14.971 Latency(us) 00:17:14.971 Device Information : IOPS MiB/s Average min max 00:17:14.971 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35459.73 138.51 3608.96 1137.04 9007.52 00:17:14.971 ======================================================== 00:17:14.971 Total : 35459.73 138.51 3608.96 1137.04 9007.52 00:17:14.971 00:17:14.971 [2024-07-23 06:13:08.014186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:14.971 06:13:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:14.971 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.971 [2024-07-23 06:13:08.254333] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:20.242 Initializing NVMe Controllers 00:17:20.242 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.242 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:20.242 Initialization complete. Launching workers. 
00:17:20.242 ======================================================== 00:17:20.242 Latency(us) 00:17:20.242 Device Information : IOPS MiB/s Average min max 00:17:20.242 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16006.00 62.52 8005.36 4993.39 15959.66 00:17:20.242 ======================================================== 00:17:20.242 Total : 16006.00 62.52 8005.36 4993.39 15959.66 00:17:20.242 00:17:20.242 [2024-07-23 06:13:13.289371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:20.242 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:20.242 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.242 [2024-07-23 06:13:13.492422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.516 [2024-07-23 06:13:18.566955] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.516 Initializing NVMe Controllers 00:17:25.516 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:25.516 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:25.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:25.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:25.516 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:25.516 Initialization complete. Launching workers. 00:17:25.516 Starting thread on core 2 00:17:25.516 Starting thread on core 3 00:17:25.516 Starting thread on core 1 00:17:25.516 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:25.516 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.775 [2024-07-23 06:13:18.874120] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:29.069 [2024-07-23 06:13:21.947350] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:29.069 Initializing NVMe Controllers 00:17:29.069 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.069 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.069 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:29.069 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:29.069 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:29.069 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:29.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:29.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:29.069 Initialization complete. Launching workers. 
00:17:29.069 Starting thread on core 1 with urgent priority queue 00:17:29.069 Starting thread on core 2 with urgent priority queue 00:17:29.069 Starting thread on core 3 with urgent priority queue 00:17:29.069 Starting thread on core 0 with urgent priority queue 00:17:29.069 SPDK bdev Controller (SPDK1 ) core 0: 5720.33 IO/s 17.48 secs/100000 ios 00:17:29.069 SPDK bdev Controller (SPDK1 ) core 1: 5521.67 IO/s 18.11 secs/100000 ios 00:17:29.069 SPDK bdev Controller (SPDK1 ) core 2: 5966.67 IO/s 16.76 secs/100000 ios 00:17:29.069 SPDK bdev Controller (SPDK1 ) core 3: 5619.33 IO/s 17.80 secs/100000 ios 00:17:29.069 ======================================================== 00:17:29.069 00:17:29.069 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:29.069 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.069 [2024-07-23 06:13:22.249119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:29.069 Initializing NVMe Controllers 00:17:29.069 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.069 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.069 Namespace ID: 1 size: 0GB 00:17:29.069 Initialization complete. 00:17:29.069 INFO: using host memory buffer for IO 00:17:29.069 Hello world! 00:17:29.069 [2024-07-23 06:13:22.283711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:29.069 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:29.069 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.329 [2024-07-23 06:13:22.571048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:30.264 Initializing NVMe Controllers 00:17:30.264 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:30.264 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:30.264 Initialization complete. Launching workers. 
00:17:30.264 submit (in ns) avg, min, max = 8050.2, 3538.9, 4015858.9 00:17:30.264 complete (in ns) avg, min, max = 24126.4, 2065.6, 4014321.1 00:17:30.264 00:17:30.264 Submit histogram 00:17:30.264 ================ 00:17:30.264 Range in us Cumulative Count 00:17:30.264 3.532 - 3.556: 0.4708% ( 64) 00:17:30.264 3.556 - 3.579: 1.9127% ( 196) 00:17:30.264 3.579 - 3.603: 4.9879% ( 418) 00:17:30.264 3.603 - 3.627: 11.8958% ( 939) 00:17:30.264 3.627 - 3.650: 21.9672% ( 1369) 00:17:30.264 3.650 - 3.674: 31.9576% ( 1358) 00:17:30.264 3.674 - 3.698: 40.1383% ( 1112) 00:17:30.264 3.698 - 3.721: 47.6569% ( 1022) 00:17:30.264 3.721 - 3.745: 53.4761% ( 791) 00:17:30.264 3.745 - 3.769: 59.0598% ( 759) 00:17:30.264 3.769 - 3.793: 64.2978% ( 712) 00:17:30.264 3.793 - 3.816: 68.5206% ( 574) 00:17:30.264 3.816 - 3.840: 71.7428% ( 438) 00:17:30.264 3.840 - 3.864: 75.3844% ( 495) 00:17:30.264 3.864 - 3.887: 78.7906% ( 463) 00:17:30.264 3.887 - 3.911: 81.9319% ( 427) 00:17:30.264 3.911 - 3.935: 85.1247% ( 434) 00:17:30.264 3.935 - 3.959: 87.2729% ( 292) 00:17:30.264 3.959 - 3.982: 89.2224% ( 265) 00:17:30.264 3.982 - 4.006: 90.9512% ( 235) 00:17:30.264 4.006 - 4.030: 92.6506% ( 231) 00:17:30.264 4.030 - 4.053: 94.0263% ( 187) 00:17:30.264 4.053 - 4.077: 95.1078% ( 147) 00:17:30.264 4.077 - 4.101: 95.7920% ( 93) 00:17:30.264 4.101 - 4.124: 96.2554% ( 63) 00:17:30.265 4.124 - 4.148: 96.5497% ( 40) 00:17:30.265 4.148 - 4.172: 96.7115% ( 22) 00:17:30.265 4.172 - 4.196: 96.8366% ( 17) 00:17:30.265 4.196 - 4.219: 96.9690% ( 18) 00:17:30.265 4.219 - 4.243: 97.1088% ( 19) 00:17:30.265 4.243 - 4.267: 97.1971% ( 12) 00:17:30.265 4.267 - 4.290: 97.2633% ( 9) 00:17:30.265 4.290 - 4.314: 97.3295% ( 9) 00:17:30.265 4.314 - 4.338: 97.4251% ( 13) 00:17:30.265 4.338 - 4.361: 97.4840% ( 8) 00:17:30.265 4.361 - 4.385: 97.4987% ( 2) 00:17:30.265 4.385 - 4.409: 97.5502% ( 7) 00:17:30.265 4.409 - 4.433: 97.5723% ( 3) 00:17:30.265 4.433 - 4.456: 97.5870% ( 2) 00:17:30.265 4.456 - 4.480: 97.6091% ( 3) 00:17:30.265 4.480 - 4.504: 97.6238% ( 2) 00:17:30.265 4.504 - 4.527: 97.6311% ( 1) 00:17:30.265 4.527 - 4.551: 97.6532% ( 3) 00:17:30.265 4.551 - 4.575: 97.6973% ( 6) 00:17:30.265 4.575 - 4.599: 97.7268% ( 4) 00:17:30.265 4.599 - 4.622: 97.7562% ( 4) 00:17:30.265 4.622 - 4.646: 97.8151% ( 8) 00:17:30.265 4.646 - 4.670: 97.8518% ( 5) 00:17:30.265 4.670 - 4.693: 97.8960% ( 6) 00:17:30.265 4.693 - 4.717: 97.9401% ( 6) 00:17:30.265 4.717 - 4.741: 97.9843% ( 6) 00:17:30.265 4.741 - 4.764: 98.0505% ( 9) 00:17:30.265 4.764 - 4.788: 98.0873% ( 5) 00:17:30.265 4.788 - 4.812: 98.1314% ( 6) 00:17:30.265 4.812 - 4.836: 98.1829% ( 7) 00:17:30.265 4.836 - 4.859: 98.2270% ( 6) 00:17:30.265 4.859 - 4.883: 98.2344% ( 1) 00:17:30.265 4.883 - 4.907: 98.2565% ( 3) 00:17:30.265 4.907 - 4.930: 98.2785% ( 3) 00:17:30.265 4.954 - 4.978: 98.2932% ( 2) 00:17:30.265 4.978 - 5.001: 98.3153% ( 3) 00:17:30.265 5.001 - 5.025: 98.3300% ( 2) 00:17:30.265 5.025 - 5.049: 98.3521% ( 3) 00:17:30.265 5.049 - 5.073: 98.3594% ( 1) 00:17:30.265 5.073 - 5.096: 98.3742% ( 2) 00:17:30.265 5.120 - 5.144: 98.3815% ( 1) 00:17:30.265 5.215 - 5.239: 98.3962% ( 2) 00:17:30.265 5.428 - 5.452: 98.4036% ( 1) 00:17:30.265 5.594 - 5.618: 98.4109% ( 1) 00:17:30.265 5.641 - 5.665: 98.4183% ( 1) 00:17:30.265 5.879 - 5.902: 98.4257% ( 1) 00:17:30.265 6.353 - 6.400: 98.4404% ( 2) 00:17:30.265 6.400 - 6.447: 98.4477% ( 1) 00:17:30.265 6.447 - 6.495: 98.4551% ( 1) 00:17:30.265 6.495 - 6.542: 98.4624% ( 1) 00:17:30.265 6.732 - 6.779: 98.4698% ( 1) 00:17:30.265 6.921 - 6.969: 98.4919% ( 3) 
00:17:30.265 7.016 - 7.064: 98.4992% ( 1) 00:17:30.265 7.064 - 7.111: 98.5066% ( 1) 00:17:30.265 7.111 - 7.159: 98.5213% ( 2) 00:17:30.265 7.253 - 7.301: 98.5287% ( 1) 00:17:30.265 7.443 - 7.490: 98.5434% ( 2) 00:17:30.265 7.585 - 7.633: 98.5654% ( 3) 00:17:30.265 7.633 - 7.680: 98.5728% ( 1) 00:17:30.265 7.727 - 7.775: 98.5875% ( 2) 00:17:30.265 7.775 - 7.822: 98.6022% ( 2) 00:17:30.265 7.822 - 7.870: 98.6096% ( 1) 00:17:30.265 7.870 - 7.917: 98.6169% ( 1) 00:17:30.265 7.917 - 7.964: 98.6243% ( 1) 00:17:30.265 7.964 - 8.012: 98.6390% ( 2) 00:17:30.265 8.012 - 8.059: 98.6464% ( 1) 00:17:30.265 8.201 - 8.249: 98.6611% ( 2) 00:17:30.265 8.249 - 8.296: 98.6684% ( 1) 00:17:30.265 8.296 - 8.344: 98.6758% ( 1) 00:17:30.265 8.581 - 8.628: 98.6831% ( 1) 00:17:30.265 8.676 - 8.723: 98.6979% ( 2) 00:17:30.265 8.723 - 8.770: 98.7052% ( 1) 00:17:30.265 8.818 - 8.865: 98.7126% ( 1) 00:17:30.265 8.865 - 8.913: 98.7199% ( 1) 00:17:30.265 9.007 - 9.055: 98.7273% ( 1) 00:17:30.265 9.102 - 9.150: 98.7420% ( 2) 00:17:30.265 9.150 - 9.197: 98.7494% ( 1) 00:17:30.265 9.197 - 9.244: 98.7788% ( 4) 00:17:30.265 9.244 - 9.292: 98.7861% ( 1) 00:17:30.265 9.292 - 9.339: 98.7935% ( 1) 00:17:30.265 9.434 - 9.481: 98.8009% ( 1) 00:17:30.265 9.529 - 9.576: 98.8082% ( 1) 00:17:30.265 9.576 - 9.624: 98.8156% ( 1) 00:17:30.265 9.861 - 9.908: 98.8229% ( 1) 00:17:30.265 9.908 - 9.956: 98.8303% ( 1) 00:17:30.265 9.956 - 10.003: 98.8450% ( 2) 00:17:30.265 10.145 - 10.193: 98.8524% ( 1) 00:17:30.265 10.477 - 10.524: 98.8597% ( 1) 00:17:30.265 10.761 - 10.809: 98.8671% ( 1) 00:17:30.265 10.951 - 10.999: 98.8744% ( 1) 00:17:30.265 11.046 - 11.093: 98.8818% ( 1) 00:17:30.265 11.093 - 11.141: 98.8891% ( 1) 00:17:30.265 11.283 - 11.330: 98.8965% ( 1) 00:17:30.265 11.520 - 11.567: 98.9038% ( 1) 00:17:30.265 11.567 - 11.615: 98.9112% ( 1) 00:17:30.265 11.662 - 11.710: 98.9186% ( 1) 00:17:30.265 11.994 - 12.041: 98.9259% ( 1) 00:17:30.265 12.326 - 12.421: 98.9333% ( 1) 00:17:30.265 12.421 - 12.516: 98.9406% ( 1) 00:17:30.265 12.516 - 12.610: 98.9553% ( 2) 00:17:30.265 12.705 - 12.800: 98.9627% ( 1) 00:17:30.265 12.895 - 12.990: 98.9701% ( 1) 00:17:30.265 12.990 - 13.084: 98.9848% ( 2) 00:17:30.265 13.179 - 13.274: 98.9921% ( 1) 00:17:30.265 13.369 - 13.464: 98.9995% ( 1) 00:17:30.265 13.464 - 13.559: 99.0068% ( 1) 00:17:30.265 13.559 - 13.653: 99.0142% ( 1) 00:17:30.265 13.653 - 13.748: 99.0289% ( 2) 00:17:30.265 13.843 - 13.938: 99.0363% ( 1) 00:17:30.265 14.033 - 14.127: 99.0436% ( 1) 00:17:30.265 16.972 - 17.067: 99.0510% ( 1) 00:17:30.265 17.161 - 17.256: 99.0657% ( 2) 00:17:30.265 17.256 - 17.351: 99.0878% ( 3) 00:17:30.265 17.351 - 17.446: 99.1393% ( 7) 00:17:30.265 17.446 - 17.541: 99.1687% ( 4) 00:17:30.265 17.541 - 17.636: 99.2349% ( 9) 00:17:30.265 17.636 - 17.730: 99.2717% ( 5) 00:17:30.265 17.730 - 17.825: 99.3085% ( 5) 00:17:30.265 17.825 - 17.920: 99.3379% ( 4) 00:17:30.265 17.920 - 18.015: 99.3967% ( 8) 00:17:30.265 18.015 - 18.110: 99.4409% ( 6) 00:17:30.265 18.110 - 18.204: 99.5145% ( 10) 00:17:30.265 18.204 - 18.299: 99.5880% ( 10) 00:17:30.265 18.299 - 18.394: 99.6101% ( 3) 00:17:30.265 18.394 - 18.489: 99.6395% ( 4) 00:17:30.265 18.489 - 18.584: 99.6837% ( 6) 00:17:30.265 18.584 - 18.679: 99.7131% ( 4) 00:17:30.265 18.679 - 18.773: 99.7278% ( 2) 00:17:30.265 18.773 - 18.868: 99.7572% ( 4) 00:17:30.265 18.868 - 18.963: 99.7940% ( 5) 00:17:30.265 18.963 - 19.058: 99.8087% ( 2) 00:17:30.265 19.058 - 19.153: 99.8234% ( 2) 00:17:30.265 19.153 - 19.247: 99.8382% ( 2) 00:17:30.265 19.247 - 19.342: 99.8455% ( 1) 
00:17:30.265 19.437 - 19.532: 99.8529% ( 1) 00:17:30.265 19.627 - 19.721: 99.8602% ( 1) 00:17:30.265 19.911 - 20.006: 99.8676% ( 1) 00:17:30.265 21.523 - 21.618: 99.8749% ( 1) 00:17:30.265 22.850 - 22.945: 99.8823% ( 1) 00:17:30.265 23.135 - 23.230: 99.8896% ( 1) 00:17:30.265 24.841 - 25.031: 99.8970% ( 1) 00:17:30.265 3980.705 - 4004.978: 99.9853% ( 12) 00:17:30.265 4004.978 - 4029.250: 100.0000% ( 2) 00:17:30.265 00:17:30.265 Complete histogram 00:17:30.265 ================== 00:17:30.265 Range in us Cumulative Count 00:17:30.265 2.062 - 2.074: 2.2291% ( 303) 00:17:30.265 2.074 - 2.086: 31.7516% ( 4013) 00:17:30.265 2.086 - 2.098: 38.1299% ( 867) 00:17:30.265 2.098 - 2.110: 44.9055% ( 921) 00:17:30.265 2.110 - 2.121: 58.4198% ( 1837) 00:17:30.265 2.121 - 2.133: 60.4282% ( 273) 00:17:30.265 2.133 - 2.145: 66.6667% ( 848) 00:17:30.265 2.145 - 2.157: 78.2682% ( 1577) 00:17:30.265 2.157 - 2.169: 79.7469% ( 201) 00:17:30.265 2.169 - 2.181: 84.3817% ( 630) 00:17:30.265 2.181 - 2.193: 89.1783% ( 652) 00:17:30.265 2.193 - 2.204: 89.9949% ( 111) 00:17:30.265 2.204 - 2.216: 91.1866% ( 162) 00:17:30.265 2.216 - 2.228: 93.6438% ( 334) 00:17:30.265 2.228 - 2.240: 94.8429% ( 163) 00:17:30.265 2.240 - 2.252: 95.1961% ( 48) 00:17:30.265 2.252 - 2.264: 95.6007% ( 55) 00:17:30.265 2.264 - 2.276: 95.6963% ( 13) 00:17:30.265 2.276 - 2.287: 95.8434% ( 20) 00:17:30.265 2.287 - 2.299: 96.1671% ( 44) 00:17:30.265 2.299 - 2.311: 96.3437% ( 24) 00:17:30.265 2.311 - 2.323: 96.4026% ( 8) 00:17:30.265 2.323 - 2.335: 96.4393% ( 5) 00:17:30.265 2.335 - 2.347: 96.4982% ( 8) 00:17:30.265 2.347 - 2.359: 96.7263% ( 31) 00:17:30.265 2.359 - 2.370: 97.1309% ( 55) 00:17:30.265 2.370 - 2.382: 97.4399% ( 42) 00:17:30.266 2.382 - 2.394: 97.7121% ( 37) 00:17:30.266 2.394 - 2.406: 97.9990% ( 39) 00:17:30.266 2.406 - 2.418: 98.1093% ( 15) 00:17:30.266 2.418 - 2.430: 98.2491% ( 19) 00:17:30.266 2.430 - 2.441: 98.3668% ( 16) 00:17:30.266 2.441 - 2.453: 98.4477% ( 11) 00:17:30.266 2.453 - 2.465: 98.4772% ( 4) 00:17:30.266 2.465 - 2.477: 98.5066% ( 4) 00:17:30.266 2.477 - 2.489: 98.5360% ( 4) 00:17:30.266 2.489 - 2.501: 98.5507% ( 2) 00:17:30.266 2.501 - 2.513: 98.5654% ( 2) 00:17:30.266 2.513 - 2.524: 98.5728% ( 1) 00:17:30.266 2.524 - 2.536: 98.5802% ( 1) 00:17:30.266 2.536 - 2.548: 98.5875% ( 1) 00:17:30.266 2.548 - 2.560: 98.5949% ( 1) 00:17:30.266 2.607 - 2.619: 98.6022% ( 1) 00:17:30.266 2.619 - 2.631: 98.6096% ( 1) 00:17:30.266 2.655 - 2.667: 98.6169% ( 1) 00:17:30.266 2.714 - 2.726: 98.6243% ( 1) 00:17:30.266 2.809 - 2.821: 98.6390% ( 2) 00:17:30.266 3.390 - 3.413: 98.6464% ( 1) 00:17:30.266 3.413 - 3.437: 98.6537% ( 1) 00:17:30.266 3.437 - 3.461: 98.6611% ( 1) 00:17:30.266 3.461 - 3.484: 98.6905% ( 4) 00:17:30.266 3.484 - 3.508: 98.7052% ( 2) 00:17:30.266 3.508 - 3.532: 98.7126% ( 1) 00:17:30.266 3.532 - 3.556: 98.7346% ( 3) 00:17:30.266 3.556 - 3.579: 98.7494% ( 2) 00:17:30.266 3.579 - 3.603: 98.7714% ( 3) 00:17:30.266 3.627 - 3.650: 98.7788% ( 1) 00:17:30.266 3.650 - 3.674: 98.8009% ( 3) 00:17:30.266 3.745 - 3.769: 98.8082% ( 1) 00:17:30.266 3.769 - 3.793: 98.8156% ( 1) 00:17:30.266 3.816 - 3.840: 98.8229% ( 1) 00:17:30.266 3.840 - 3.864: 98.8303% ( 1) 00:17:30.266 3.864 - 3.887: 98.8376% ( 1) 00:17:30.266 3.887 - 3.911: 98.8524% ( 2) 00:17:30.266 3.911 - 3.935: 98.8597% ( 1) 00:17:30.266 3.959 - 3.982: 98.8671% ( 1) 00:17:30.266 4.101 - 4.124: 98.8744% ( 1) 00:17:30.266 5.096 - 5.120: 98.8818% ( 1) 00:17:30.266 5.476 - 5.499: 98.8891% ( 1) 00:17:30.266 5.570 - 5.594: 98.8965% ( 1) 00:17:30.266 5.618 - 5.641: 98.9038% ( 1) 
00:17:30.266 5.665 - 5.689: 98.9112% ( 1) 00:17:30.266 5.807 - 5.831: 98.9186% ( 1) 00:17:30.266 6.044 - 6.068: 98.9259% ( 1) 00:17:30.266 6.210 - 6.258: 98.9333% ( 1) 00:17:30.266 6.447 - 6.495: 98.9406% ( 1) 00:17:30.266 6.684 - 6.732: 98.9480% ( 1) 00:17:30.266 6.732 - 6.779: 98.9553% ( 1) 00:17:30.266 6.827 - 6.874: 98.9627% ( 1) 00:17:30.266 6.921 - 6.969: 98.9701% ( 1) 00:17:30.266 7.016 - 7.064: 98.9774% ( 1) 00:17:30.266 7.111 - 7.159: 98.9848% ( 1) 00:17:30.266 7.159 - 7.206: 98.9921% ( 1) 00:17:30.266 7.301 - 7.348: 9[2024-07-23 06:13:23.592117] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:30.526 8.9995% ( 1) 00:17:30.526 8.012 - 8.059: 99.0068% ( 1) 00:17:30.526 8.249 - 8.296: 99.0142% ( 1) 00:17:30.526 8.581 - 8.628: 99.0216% ( 1) 00:17:30.526 10.193 - 10.240: 99.0289% ( 1) 00:17:30.526 15.644 - 15.739: 99.0363% ( 1) 00:17:30.526 15.739 - 15.834: 99.0510% ( 2) 00:17:30.526 15.834 - 15.929: 99.0657% ( 2) 00:17:30.526 15.929 - 16.024: 99.0878% ( 3) 00:17:30.526 16.024 - 16.119: 99.0951% ( 1) 00:17:30.526 16.119 - 16.213: 99.1025% ( 1) 00:17:30.526 16.213 - 16.308: 99.1172% ( 2) 00:17:30.526 16.308 - 16.403: 99.1245% ( 1) 00:17:30.526 16.403 - 16.498: 99.1760% ( 7) 00:17:30.526 16.498 - 16.593: 99.2423% ( 9) 00:17:30.526 16.593 - 16.687: 99.2717% ( 4) 00:17:30.526 16.687 - 16.782: 99.2864% ( 2) 00:17:30.526 16.782 - 16.877: 99.3011% ( 2) 00:17:30.526 16.877 - 16.972: 99.3232% ( 3) 00:17:30.526 16.972 - 17.067: 99.3379% ( 2) 00:17:30.526 17.067 - 17.161: 99.3673% ( 4) 00:17:30.526 17.256 - 17.351: 99.3820% ( 2) 00:17:30.526 17.446 - 17.541: 99.3894% ( 1) 00:17:30.526 17.541 - 17.636: 99.4115% ( 3) 00:17:30.526 17.636 - 17.730: 99.4188% ( 1) 00:17:30.526 18.110 - 18.204: 99.4262% ( 1) 00:17:30.526 18.489 - 18.584: 99.4409% ( 2) 00:17:30.526 24.652 - 24.841: 99.4482% ( 1) 00:17:30.526 2160.261 - 2172.397: 99.4556% ( 1) 00:17:30.526 3956.433 - 3980.705: 99.4630% ( 1) 00:17:30.526 3980.705 - 4004.978: 99.9559% ( 67) 00:17:30.526 4004.978 - 4029.250: 100.0000% ( 6) 00:17:30.526 00:17:30.526 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:30.526 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:30.526 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:30.526 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:30.526 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:30.817 [ 00:17:30.817 { 00:17:30.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:30.817 "subtype": "Discovery", 00:17:30.817 "listen_addresses": [], 00:17:30.817 "allow_any_host": true, 00:17:30.817 "hosts": [] 00:17:30.817 }, 00:17:30.817 { 00:17:30.817 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:30.817 "subtype": "NVMe", 00:17:30.817 "listen_addresses": [ 00:17:30.817 { 00:17:30.817 "trtype": "VFIOUSER", 00:17:30.817 "adrfam": "IPv4", 00:17:30.817 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:30.817 "trsvcid": "0" 00:17:30.817 } 00:17:30.817 ], 00:17:30.817 "allow_any_host": true, 00:17:30.817 "hosts": [], 00:17:30.817 "serial_number": "SPDK1", 
00:17:30.817 "model_number": "SPDK bdev Controller", 00:17:30.817 "max_namespaces": 32, 00:17:30.817 "min_cntlid": 1, 00:17:30.817 "max_cntlid": 65519, 00:17:30.817 "namespaces": [ 00:17:30.817 { 00:17:30.817 "nsid": 1, 00:17:30.817 "bdev_name": "Malloc1", 00:17:30.817 "name": "Malloc1", 00:17:30.817 "nguid": "F5835EBABFB74245858A26A7FA2EAB2C", 00:17:30.817 "uuid": "f5835eba-bfb7-4245-858a-26a7fa2eab2c" 00:17:30.817 } 00:17:30.817 ] 00:17:30.817 }, 00:17:30.817 { 00:17:30.817 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:30.817 "subtype": "NVMe", 00:17:30.817 "listen_addresses": [ 00:17:30.817 { 00:17:30.817 "trtype": "VFIOUSER", 00:17:30.817 "adrfam": "IPv4", 00:17:30.817 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:30.817 "trsvcid": "0" 00:17:30.817 } 00:17:30.817 ], 00:17:30.817 "allow_any_host": true, 00:17:30.817 "hosts": [], 00:17:30.817 "serial_number": "SPDK2", 00:17:30.817 "model_number": "SPDK bdev Controller", 00:17:30.817 "max_namespaces": 32, 00:17:30.817 "min_cntlid": 1, 00:17:30.817 "max_cntlid": 65519, 00:17:30.817 "namespaces": [ 00:17:30.817 { 00:17:30.817 "nsid": 1, 00:17:30.817 "bdev_name": "Malloc2", 00:17:30.817 "name": "Malloc2", 00:17:30.817 "nguid": "974ED5866CA5490BBD5BFE4D99A31542", 00:17:30.817 "uuid": "974ed586-6ca5-490b-bd5b-fe4d99a31542" 00:17:30.817 } 00:17:30.817 ] 00:17:30.817 } 00:17:30.817 ] 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1733776 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:30.817 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:30.817 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.817 [2024-07-23 06:13:24.093564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:31.075 Malloc3 00:17:31.075 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:31.333 [2024-07-23 06:13:24.487530] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:31.333 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:31.333 Asynchronous Event Request test 00:17:31.333 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:31.333 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:31.333 Registering asynchronous event callbacks... 00:17:31.333 Starting namespace attribute notice tests for all controllers... 00:17:31.333 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:31.333 aer_cb - Changed Namespace 00:17:31.333 Cleaning up... 00:17:31.594 [ 00:17:31.594 { 00:17:31.594 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:31.594 "subtype": "Discovery", 00:17:31.594 "listen_addresses": [], 00:17:31.594 "allow_any_host": true, 00:17:31.594 "hosts": [] 00:17:31.594 }, 00:17:31.594 { 00:17:31.594 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:31.594 "subtype": "NVMe", 00:17:31.594 "listen_addresses": [ 00:17:31.594 { 00:17:31.594 "trtype": "VFIOUSER", 00:17:31.594 "adrfam": "IPv4", 00:17:31.594 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:31.594 "trsvcid": "0" 00:17:31.594 } 00:17:31.594 ], 00:17:31.594 "allow_any_host": true, 00:17:31.594 "hosts": [], 00:17:31.594 "serial_number": "SPDK1", 00:17:31.594 "model_number": "SPDK bdev Controller", 00:17:31.594 "max_namespaces": 32, 00:17:31.594 "min_cntlid": 1, 00:17:31.594 "max_cntlid": 65519, 00:17:31.594 "namespaces": [ 00:17:31.594 { 00:17:31.594 "nsid": 1, 00:17:31.594 "bdev_name": "Malloc1", 00:17:31.594 "name": "Malloc1", 00:17:31.594 "nguid": "F5835EBABFB74245858A26A7FA2EAB2C", 00:17:31.594 "uuid": "f5835eba-bfb7-4245-858a-26a7fa2eab2c" 00:17:31.594 }, 00:17:31.594 { 00:17:31.594 "nsid": 2, 00:17:31.594 "bdev_name": "Malloc3", 00:17:31.594 "name": "Malloc3", 00:17:31.594 "nguid": "1532D27183EA4831963FD0FA37DA3A48", 00:17:31.594 "uuid": "1532d271-83ea-4831-963f-d0fa37da3a48" 00:17:31.594 } 00:17:31.594 ] 00:17:31.594 }, 00:17:31.594 { 00:17:31.594 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:31.594 "subtype": "NVMe", 00:17:31.594 "listen_addresses": [ 00:17:31.594 { 00:17:31.594 "trtype": "VFIOUSER", 00:17:31.594 "adrfam": "IPv4", 00:17:31.594 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:31.594 "trsvcid": "0" 00:17:31.594 } 00:17:31.594 ], 00:17:31.594 "allow_any_host": true, 00:17:31.594 "hosts": [], 00:17:31.594 
"serial_number": "SPDK2", 00:17:31.594 "model_number": "SPDK bdev Controller", 00:17:31.594 "max_namespaces": 32, 00:17:31.594 "min_cntlid": 1, 00:17:31.594 "max_cntlid": 65519, 00:17:31.594 "namespaces": [ 00:17:31.594 { 00:17:31.594 "nsid": 1, 00:17:31.594 "bdev_name": "Malloc2", 00:17:31.594 "name": "Malloc2", 00:17:31.594 "nguid": "974ED5866CA5490BBD5BFE4D99A31542", 00:17:31.594 "uuid": "974ed586-6ca5-490b-bd5b-fe4d99a31542" 00:17:31.594 } 00:17:31.594 ] 00:17:31.594 } 00:17:31.594 ] 00:17:31.594 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1733776 00:17:31.594 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:31.595 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:31.595 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:31.595 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:31.595 [2024-07-23 06:13:24.776309] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:31.595 [2024-07-23 06:13:24.776346] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733873 ] 00:17:31.595 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.595 [2024-07-23 06:13:24.794174] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:31.595 [2024-07-23 06:13:24.811647] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:31.595 [2024-07-23 06:13:24.819947] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:31.595 [2024-07-23 06:13:24.819978] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8a1a492000 00:17:31.595 [2024-07-23 06:13:24.820946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.821943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.822970] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.823967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.824966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.825980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.826982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.827990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.595 [2024-07-23 06:13:24.828998] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:31.595 [2024-07-23 06:13:24.829023] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8a19254000 00:17:31.595 [2024-07-23 06:13:24.830135] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:31.595 [2024-07-23 06:13:24.846213] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:31.595 [2024-07-23 06:13:24.846245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:31.595 [2024-07-23 06:13:24.851351] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:31.595 [2024-07-23 06:13:24.851409] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:31.595 [2024-07-23 06:13:24.851493] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:31.595 [2024-07-23 06:13:24.851516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:31.595 [2024-07-23 06:13:24.851526] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
00:17:31.595 [2024-07-23 06:13:24.852356] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:31.595 [2024-07-23 06:13:24.852381] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:31.595 [2024-07-23 06:13:24.852394] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:31.595 [2024-07-23 06:13:24.853363] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:31.595 [2024-07-23 06:13:24.853386] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:31.595 [2024-07-23 06:13:24.853399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:31.595 [2024-07-23 06:13:24.854368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:31.595 [2024-07-23 06:13:24.854387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:31.595 [2024-07-23 06:13:24.855368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:31.595 [2024-07-23 06:13:24.855388] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:31.595 [2024-07-23 06:13:24.855396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:31.595 [2024-07-23 06:13:24.855407] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:31.595 [2024-07-23 06:13:24.855516] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:31.595 [2024-07-23 06:13:24.855524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:31.595 [2024-07-23 06:13:24.855532] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:31.595 [2024-07-23 06:13:24.856375] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:31.595 [2024-07-23 06:13:24.857379] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:31.595 [2024-07-23 06:13:24.858381] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:31.595 [2024-07-23 06:13:24.859377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.595 [2024-07-23 06:13:24.859454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:17:31.595 [2024-07-23 06:13:24.860389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:31.595 [2024-07-23 06:13:24.860408] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:31.595 [2024-07-23 06:13:24.860417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:31.595 [2024-07-23 06:13:24.860440] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:31.595 [2024-07-23 06:13:24.860453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:31.595 [2024-07-23 06:13:24.860472] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:31.595 [2024-07-23 06:13:24.860481] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.595 [2024-07-23 06:13:24.860487] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.595 [2024-07-23 06:13:24.860504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.595 [2024-07-23 06:13:24.864631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:31.595 [2024-07-23 06:13:24.864656] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:31.595 [2024-07-23 06:13:24.864666] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:31.595 [2024-07-23 06:13:24.864674] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:31.595 [2024-07-23 06:13:24.864681] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:31.595 [2024-07-23 06:13:24.864689] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:31.595 [2024-07-23 06:13:24.864697] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:31.595 [2024-07-23 06:13:24.864704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:31.595 [2024-07-23 06:13:24.864716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:31.595 [2024-07-23 06:13:24.864731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:31.595 [2024-07-23 06:13:24.872636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:31.595 [2024-07-23 06:13:24.872664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.595 [2024-07-23 06:13:24.872679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.595 [2024-07-23 06:13:24.872694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.595 [2024-07-23 06:13:24.872707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.595 [2024-07-23 06:13:24.872715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:31.595 [2024-07-23 06:13:24.872730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:31.595 [2024-07-23 06:13:24.872745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:31.595 [2024-07-23 06:13:24.880624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:31.595 [2024-07-23 06:13:24.880641] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:31.595 [2024-07-23 06:13:24.880651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:31.595 [2024-07-23 06:13:24.880662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.880672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.880686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:31.596 [2024-07-23 06:13:24.888636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:31.596 [2024-07-23 06:13:24.888710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.888728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.888741] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:31.596 [2024-07-23 06:13:24.888750] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:31.596 [2024-07-23 06:13:24.888756] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.596 [2024-07-23 06:13:24.888765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:31.596 [2024-07-23 06:13:24.896626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:31.596 [2024-07-23 06:13:24.896661] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:31.596 [2024-07-23 06:13:24.896691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.896706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.896718] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:31.596 [2024-07-23 06:13:24.896727] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.596 [2024-07-23 06:13:24.896733] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.596 [2024-07-23 06:13:24.896742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.596 [2024-07-23 06:13:24.904623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:31.596 [2024-07-23 06:13:24.904650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.904667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.904680] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:31.596 [2024-07-23 06:13:24.904688] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.596 [2024-07-23 06:13:24.904694] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.596 [2024-07-23 06:13:24.904704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.596 [2024-07-23 06:13:24.912625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:31.596 [2024-07-23 06:13:24.912647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.912659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.912674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.912685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.912693] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.912701] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.912709] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:31.596 [2024-07-23 06:13:24.912716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:31.596 [2024-07-23 06:13:24.912724] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:31.596 [2024-07-23 06:13:24.912750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:31.596 [2024-07-23 06:13:24.920622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:31.596 [2024-07-23 06:13:24.920650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:31.596 [2024-07-23 06:13:24.928623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:31.596 [2024-07-23 06:13:24.928648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:31.596 [2024-07-23 06:13:24.936638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:31.596 [2024-07-23 06:13:24.936665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:31.857 [2024-07-23 06:13:24.944640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:31.857 [2024-07-23 06:13:24.944677] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:31.857 [2024-07-23 06:13:24.944689] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:31.857 [2024-07-23 06:13:24.944695] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:31.857 [2024-07-23 06:13:24.944701] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:31.857 [2024-07-23 06:13:24.944707] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:31.857 [2024-07-23 06:13:24.944717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:31.857 [2024-07-23 06:13:24.944729] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:31.857 [2024-07-23 06:13:24.944737] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:31.857 [2024-07-23 06:13:24.944743] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.857 [2024-07-23 06:13:24.944752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:31.857 [2024-07-23 06:13:24.944763] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:31.857 [2024-07-23 06:13:24.944771] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.857 [2024-07-23 06:13:24.944777] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.857 [2024-07-23 06:13:24.944785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.858 [2024-07-23 06:13:24.944798] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:31.858 [2024-07-23 06:13:24.944806] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:31.858 [2024-07-23 06:13:24.944812] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.858 [2024-07-23 06:13:24.944820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:31.858 [2024-07-23 06:13:24.952627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:31.858 [2024-07-23 06:13:24.952655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:31.858 [2024-07-23 06:13:24.952672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:31.858 [2024-07-23 06:13:24.952684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:31.858 ===================================================== 00:17:31.858 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:31.858 ===================================================== 00:17:31.858 Controller Capabilities/Features 00:17:31.858 ================================ 00:17:31.858 Vendor ID: 4e58 00:17:31.858 Subsystem Vendor ID: 4e58 00:17:31.858 Serial Number: SPDK2 00:17:31.858 Model Number: SPDK bdev Controller 00:17:31.858 Firmware Version: 24.09 00:17:31.858 Recommended Arb Burst: 6 00:17:31.858 IEEE OUI Identifier: 8d 6b 50 00:17:31.858 Multi-path I/O 00:17:31.858 May have multiple subsystem ports: Yes 00:17:31.858 May have multiple controllers: Yes 00:17:31.858 Associated with SR-IOV VF: No 00:17:31.858 Max Data Transfer Size: 131072 00:17:31.858 Max Number of Namespaces: 32 00:17:31.858 Max Number of I/O Queues: 127 00:17:31.858 NVMe Specification Version (VS): 1.3 00:17:31.858 NVMe Specification Version (Identify): 1.3 00:17:31.858 Maximum Queue Entries: 256 00:17:31.858 Contiguous Queues Required: Yes 00:17:31.858 Arbitration Mechanisms Supported 00:17:31.858 Weighted Round Robin: Not Supported 00:17:31.858 Vendor Specific: Not Supported 00:17:31.858 Reset Timeout: 15000 ms 00:17:31.858 Doorbell Stride: 4 bytes 00:17:31.858 NVM Subsystem Reset: Not Supported 00:17:31.858 Command Sets Supported 00:17:31.858 NVM Command Set: Supported 00:17:31.858 Boot Partition: Not Supported 00:17:31.858 Memory Page Size Minimum: 4096 bytes 00:17:31.858 Memory Page Size Maximum: 4096 bytes 00:17:31.858 Persistent Memory Region: Not Supported 00:17:31.858 Optional Asynchronous Events Supported 00:17:31.858 Namespace Attribute Notices: 
Supported 00:17:31.858 Firmware Activation Notices: Not Supported 00:17:31.858 ANA Change Notices: Not Supported 00:17:31.858 PLE Aggregate Log Change Notices: Not Supported 00:17:31.858 LBA Status Info Alert Notices: Not Supported 00:17:31.858 EGE Aggregate Log Change Notices: Not Supported 00:17:31.858 Normal NVM Subsystem Shutdown event: Not Supported 00:17:31.858 Zone Descriptor Change Notices: Not Supported 00:17:31.858 Discovery Log Change Notices: Not Supported 00:17:31.858 Controller Attributes 00:17:31.858 128-bit Host Identifier: Supported 00:17:31.858 Non-Operational Permissive Mode: Not Supported 00:17:31.858 NVM Sets: Not Supported 00:17:31.858 Read Recovery Levels: Not Supported 00:17:31.858 Endurance Groups: Not Supported 00:17:31.858 Predictable Latency Mode: Not Supported 00:17:31.858 Traffic Based Keep ALive: Not Supported 00:17:31.858 Namespace Granularity: Not Supported 00:17:31.858 SQ Associations: Not Supported 00:17:31.858 UUID List: Not Supported 00:17:31.858 Multi-Domain Subsystem: Not Supported 00:17:31.858 Fixed Capacity Management: Not Supported 00:17:31.858 Variable Capacity Management: Not Supported 00:17:31.858 Delete Endurance Group: Not Supported 00:17:31.858 Delete NVM Set: Not Supported 00:17:31.858 Extended LBA Formats Supported: Not Supported 00:17:31.858 Flexible Data Placement Supported: Not Supported 00:17:31.858 00:17:31.858 Controller Memory Buffer Support 00:17:31.858 ================================ 00:17:31.858 Supported: No 00:17:31.858 00:17:31.858 Persistent Memory Region Support 00:17:31.858 ================================ 00:17:31.858 Supported: No 00:17:31.858 00:17:31.858 Admin Command Set Attributes 00:17:31.858 ============================ 00:17:31.858 Security Send/Receive: Not Supported 00:17:31.858 Format NVM: Not Supported 00:17:31.858 Firmware Activate/Download: Not Supported 00:17:31.858 Namespace Management: Not Supported 00:17:31.858 Device Self-Test: Not Supported 00:17:31.858 Directives: Not Supported 00:17:31.858 NVMe-MI: Not Supported 00:17:31.858 Virtualization Management: Not Supported 00:17:31.858 Doorbell Buffer Config: Not Supported 00:17:31.858 Get LBA Status Capability: Not Supported 00:17:31.858 Command & Feature Lockdown Capability: Not Supported 00:17:31.858 Abort Command Limit: 4 00:17:31.858 Async Event Request Limit: 4 00:17:31.858 Number of Firmware Slots: N/A 00:17:31.858 Firmware Slot 1 Read-Only: N/A 00:17:31.858 Firmware Activation Without Reset: N/A 00:17:31.858 Multiple Update Detection Support: N/A 00:17:31.858 Firmware Update Granularity: No Information Provided 00:17:31.858 Per-Namespace SMART Log: No 00:17:31.858 Asymmetric Namespace Access Log Page: Not Supported 00:17:31.858 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:31.858 Command Effects Log Page: Supported 00:17:31.858 Get Log Page Extended Data: Supported 00:17:31.858 Telemetry Log Pages: Not Supported 00:17:31.858 Persistent Event Log Pages: Not Supported 00:17:31.858 Supported Log Pages Log Page: May Support 00:17:31.858 Commands Supported & Effects Log Page: Not Supported 00:17:31.858 Feature Identifiers & Effects Log Page:May Support 00:17:31.858 NVMe-MI Commands & Effects Log Page: May Support 00:17:31.858 Data Area 4 for Telemetry Log: Not Supported 00:17:31.858 Error Log Page Entries Supported: 128 00:17:31.858 Keep Alive: Supported 00:17:31.858 Keep Alive Granularity: 10000 ms 00:17:31.858 00:17:31.858 NVM Command Set Attributes 00:17:31.858 ========================== 00:17:31.858 Submission Queue Entry Size 00:17:31.858 Max: 64 
00:17:31.858 Min: 64 00:17:31.858 Completion Queue Entry Size 00:17:31.858 Max: 16 00:17:31.858 Min: 16 00:17:31.858 Number of Namespaces: 32 00:17:31.858 Compare Command: Supported 00:17:31.858 Write Uncorrectable Command: Not Supported 00:17:31.858 Dataset Management Command: Supported 00:17:31.858 Write Zeroes Command: Supported 00:17:31.858 Set Features Save Field: Not Supported 00:17:31.858 Reservations: Not Supported 00:17:31.858 Timestamp: Not Supported 00:17:31.858 Copy: Supported 00:17:31.858 Volatile Write Cache: Present 00:17:31.858 Atomic Write Unit (Normal): 1 00:17:31.858 Atomic Write Unit (PFail): 1 00:17:31.858 Atomic Compare & Write Unit: 1 00:17:31.858 Fused Compare & Write: Supported 00:17:31.858 Scatter-Gather List 00:17:31.858 SGL Command Set: Supported (Dword aligned) 00:17:31.858 SGL Keyed: Not Supported 00:17:31.858 SGL Bit Bucket Descriptor: Not Supported 00:17:31.858 SGL Metadata Pointer: Not Supported 00:17:31.858 Oversized SGL: Not Supported 00:17:31.858 SGL Metadata Address: Not Supported 00:17:31.858 SGL Offset: Not Supported 00:17:31.858 Transport SGL Data Block: Not Supported 00:17:31.858 Replay Protected Memory Block: Not Supported 00:17:31.858 00:17:31.858 Firmware Slot Information 00:17:31.858 ========================= 00:17:31.858 Active slot: 1 00:17:31.858 Slot 1 Firmware Revision: 24.09 00:17:31.858 00:17:31.858 00:17:31.858 Commands Supported and Effects 00:17:31.858 ============================== 00:17:31.858 Admin Commands 00:17:31.858 -------------- 00:17:31.858 Get Log Page (02h): Supported 00:17:31.858 Identify (06h): Supported 00:17:31.858 Abort (08h): Supported 00:17:31.858 Set Features (09h): Supported 00:17:31.858 Get Features (0Ah): Supported 00:17:31.858 Asynchronous Event Request (0Ch): Supported 00:17:31.858 Keep Alive (18h): Supported 00:17:31.858 I/O Commands 00:17:31.858 ------------ 00:17:31.858 Flush (00h): Supported LBA-Change 00:17:31.858 Write (01h): Supported LBA-Change 00:17:31.858 Read (02h): Supported 00:17:31.858 Compare (05h): Supported 00:17:31.858 Write Zeroes (08h): Supported LBA-Change 00:17:31.858 Dataset Management (09h): Supported LBA-Change 00:17:31.858 Copy (19h): Supported LBA-Change 00:17:31.858 00:17:31.858 Error Log 00:17:31.858 ========= 00:17:31.858 00:17:31.858 Arbitration 00:17:31.858 =========== 00:17:31.858 Arbitration Burst: 1 00:17:31.858 00:17:31.858 Power Management 00:17:31.858 ================ 00:17:31.858 Number of Power States: 1 00:17:31.859 Current Power State: Power State #0 00:17:31.859 Power State #0: 00:17:31.859 Max Power: 0.00 W 00:17:31.859 Non-Operational State: Operational 00:17:31.859 Entry Latency: Not Reported 00:17:31.859 Exit Latency: Not Reported 00:17:31.859 Relative Read Throughput: 0 00:17:31.859 Relative Read Latency: 0 00:17:31.859 Relative Write Throughput: 0 00:17:31.859 Relative Write Latency: 0 00:17:31.859 Idle Power: Not Reported 00:17:31.859 Active Power: Not Reported 00:17:31.859 Non-Operational Permissive Mode: Not Supported 00:17:31.859 00:17:31.859 Health Information 00:17:31.859 ================== 00:17:31.859 Critical Warnings: 00:17:31.859 Available Spare Space: OK 00:17:31.859 Temperature: OK 00:17:31.859 Device Reliability: OK 00:17:31.859 Read Only: No 00:17:31.859 Volatile Memory Backup: OK 00:17:31.859 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:31.859 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:31.859 Available Spare: 0% 00:17:31.859 Available Sp[2024-07-23 06:13:24.952808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:31.859 [2024-07-23 06:13:24.960638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:31.859 [2024-07-23 06:13:24.960690] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:31.859 [2024-07-23 06:13:24.960709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.859 [2024-07-23 06:13:24.960720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.859 [2024-07-23 06:13:24.960730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.859 [2024-07-23 06:13:24.960746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.859 [2024-07-23 06:13:24.960827] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:31.859 [2024-07-23 06:13:24.960848] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:31.859 [2024-07-23 06:13:24.961822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.859 [2024-07-23 06:13:24.964633] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:31.859 [2024-07-23 06:13:24.964649] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:31.859 [2024-07-23 06:13:24.964848] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:31.859 [2024-07-23 06:13:24.964871] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:31.859 [2024-07-23 06:13:24.964936] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:31.859 [2024-07-23 06:13:24.966104] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:31.859 are Threshold: 0% 00:17:31.859 Life Percentage Used: 0% 00:17:31.859 Data Units Read: 0 00:17:31.859 Data Units Written: 0 00:17:31.859 Host Read Commands: 0 00:17:31.859 Host Write Commands: 0 00:17:31.859 Controller Busy Time: 0 minutes 00:17:31.859 Power Cycles: 0 00:17:31.859 Power On Hours: 0 hours 00:17:31.859 Unsafe Shutdowns: 0 00:17:31.859 Unrecoverable Media Errors: 0 00:17:31.859 Lifetime Error Log Entries: 0 00:17:31.859 Warning Temperature Time: 0 minutes 00:17:31.859 Critical Temperature Time: 0 minutes 00:17:31.859 00:17:31.859 Number of Queues 00:17:31.859 ================ 00:17:31.859 Number of I/O Submission Queues: 127 00:17:31.859 Number of I/O Completion Queues: 127 00:17:31.859 00:17:31.859 Active Namespaces 00:17:31.859 ================= 00:17:31.859 Namespace ID:1 00:17:31.859 Error Recovery Timeout: Unlimited 00:17:31.859 Command Set Identifier: NVM (00h) 00:17:31.859 Deallocate: Supported 00:17:31.859 Deallocated/Unwritten Error: Not 
Supported 00:17:31.859 Deallocated Read Value: Unknown 00:17:31.859 Deallocate in Write Zeroes: Not Supported 00:17:31.859 Deallocated Guard Field: 0xFFFF 00:17:31.859 Flush: Supported 00:17:31.859 Reservation: Supported 00:17:31.859 Namespace Sharing Capabilities: Multiple Controllers 00:17:31.859 Size (in LBAs): 131072 (0GiB) 00:17:31.859 Capacity (in LBAs): 131072 (0GiB) 00:17:31.859 Utilization (in LBAs): 131072 (0GiB) 00:17:31.859 NGUID: 974ED5866CA5490BBD5BFE4D99A31542 00:17:31.859 UUID: 974ed586-6ca5-490b-bd5b-fe4d99a31542 00:17:31.859 Thin Provisioning: Not Supported 00:17:31.859 Per-NS Atomic Units: Yes 00:17:31.859 Atomic Boundary Size (Normal): 0 00:17:31.859 Atomic Boundary Size (PFail): 0 00:17:31.859 Atomic Boundary Offset: 0 00:17:31.859 Maximum Single Source Range Length: 65535 00:17:31.859 Maximum Copy Length: 65535 00:17:31.859 Maximum Source Range Count: 1 00:17:31.859 NGUID/EUI64 Never Reused: No 00:17:31.859 Namespace Write Protected: No 00:17:31.859 Number of LBA Formats: 1 00:17:31.859 Current LBA Format: LBA Format #00 00:17:31.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:31.859 00:17:31.859 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:31.859 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.859 [2024-07-23 06:13:25.193354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:37.135 Initializing NVMe Controllers 00:17:37.135 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:37.135 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:37.135 Initialization complete. Launching workers. 00:17:37.135 ======================================================== 00:17:37.135 Latency(us) 00:17:37.135 Device Information : IOPS MiB/s Average min max 00:17:37.135 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36325.61 141.90 3522.99 1154.01 8535.41 00:17:37.135 ======================================================== 00:17:37.135 Total : 36325.61 141.90 3522.99 1154.01 8535.41 00:17:37.135 00:17:37.135 [2024-07-23 06:13:30.299009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:37.135 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:37.135 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.394 [2024-07-23 06:13:30.540702] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:42.663 Initializing NVMe Controllers 00:17:42.664 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.664 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:42.664 Initialization complete. Launching workers. 
00:17:42.664 ======================================================== 00:17:42.664 Latency(us) 00:17:42.664 Device Information : IOPS MiB/s Average min max 00:17:42.664 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32872.66 128.41 3893.36 1202.11 7762.09 00:17:42.664 ======================================================== 00:17:42.664 Total : 32872.66 128.41 3893.36 1202.11 7762.09 00:17:42.664 00:17:42.664 [2024-07-23 06:13:35.561680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:42.664 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:42.664 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.664 [2024-07-23 06:13:35.768521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.946 [2024-07-23 06:13:40.900780] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.946 Initializing NVMe Controllers 00:17:47.946 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:47.946 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:47.946 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:47.946 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:47.946 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:47.946 Initialization complete. Launching workers. 00:17:47.946 Starting thread on core 2 00:17:47.946 Starting thread on core 3 00:17:47.946 Starting thread on core 1 00:17:47.946 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:47.946 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.946 [2024-07-23 06:13:41.211128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:51.238 [2024-07-23 06:13:44.444313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:51.238 Initializing NVMe Controllers 00:17:51.238 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:51.238 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:51.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:51.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:51.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:51.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:51.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:51.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:51.238 Initialization complete. Launching workers. 
00:17:51.238 Starting thread on core 1 with urgent priority queue 00:17:51.238 Starting thread on core 2 with urgent priority queue 00:17:51.238 Starting thread on core 3 with urgent priority queue 00:17:51.238 Starting thread on core 0 with urgent priority queue 00:17:51.238 SPDK bdev Controller (SPDK2 ) core 0: 4637.33 IO/s 21.56 secs/100000 ios 00:17:51.238 SPDK bdev Controller (SPDK2 ) core 1: 4845.00 IO/s 20.64 secs/100000 ios 00:17:51.238 SPDK bdev Controller (SPDK2 ) core 2: 5696.00 IO/s 17.56 secs/100000 ios 00:17:51.238 SPDK bdev Controller (SPDK2 ) core 3: 4036.00 IO/s 24.78 secs/100000 ios 00:17:51.238 ======================================================== 00:17:51.238 00:17:51.238 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:51.238 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.496 [2024-07-23 06:13:44.750143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:51.496 Initializing NVMe Controllers 00:17:51.496 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:51.496 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:51.496 Namespace ID: 1 size: 0GB 00:17:51.496 Initialization complete. 00:17:51.496 INFO: using host memory buffer for IO 00:17:51.496 Hello world! 00:17:51.496 [2024-07-23 06:13:44.759209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:51.496 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:51.754 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.755 [2024-07-23 06:13:45.055095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:53.133 Initializing NVMe Controllers 00:17:53.133 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:53.133 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:53.133 Initialization complete. Launching workers. 
00:17:53.133 submit (in ns) avg, min, max = 8368.0, 3477.8, 4016663.3 00:17:53.133 complete (in ns) avg, min, max = 24162.6, 2065.6, 4014096.7 00:17:53.133 00:17:53.133 Submit histogram 00:17:53.133 ================ 00:17:53.133 Range in us Cumulative Count 00:17:53.133 3.461 - 3.484: 0.0074% ( 1) 00:17:53.133 3.508 - 3.532: 0.8968% ( 120) 00:17:53.133 3.532 - 3.556: 1.9343% ( 140) 00:17:53.133 3.556 - 3.579: 5.6400% ( 500) 00:17:53.133 3.579 - 3.603: 12.1915% ( 884) 00:17:53.133 3.603 - 3.627: 21.0109% ( 1190) 00:17:53.133 3.627 - 3.650: 30.6529% ( 1301) 00:17:53.133 3.650 - 3.674: 38.5089% ( 1060) 00:17:53.133 3.674 - 3.698: 46.1276% ( 1028) 00:17:53.133 3.698 - 3.721: 53.0868% ( 939) 00:17:53.133 3.721 - 3.745: 58.0375% ( 668) 00:17:53.133 3.745 - 3.769: 61.6245% ( 484) 00:17:53.133 3.769 - 3.793: 65.0189% ( 458) 00:17:53.133 3.793 - 3.816: 68.2131% ( 431) 00:17:53.133 3.816 - 3.840: 71.7038% ( 471) 00:17:53.133 3.840 - 3.864: 75.9801% ( 577) 00:17:53.133 3.864 - 3.887: 79.9229% ( 532) 00:17:53.133 3.887 - 3.911: 83.7323% ( 514) 00:17:53.133 3.911 - 3.935: 86.2744% ( 343) 00:17:53.133 3.935 - 3.959: 88.2754% ( 270) 00:17:53.133 3.959 - 3.982: 89.7354% ( 197) 00:17:53.133 3.982 - 4.006: 91.1732% ( 194) 00:17:53.133 4.006 - 4.030: 92.1737% ( 135) 00:17:53.133 4.030 - 4.053: 93.3002% ( 152) 00:17:53.133 4.053 - 4.077: 94.1748% ( 118) 00:17:53.133 4.077 - 4.101: 94.8714% ( 94) 00:17:53.133 4.101 - 4.124: 95.5088% ( 86) 00:17:53.133 4.124 - 4.148: 95.9609% ( 61) 00:17:53.133 4.148 - 4.172: 96.2944% ( 45) 00:17:53.133 4.172 - 4.196: 96.4500% ( 21) 00:17:53.133 4.196 - 4.219: 96.6872% ( 32) 00:17:53.133 4.219 - 4.243: 96.7761% ( 12) 00:17:53.133 4.243 - 4.267: 96.8650% ( 12) 00:17:53.133 4.267 - 4.290: 96.9762% ( 15) 00:17:53.133 4.290 - 4.314: 97.0651% ( 12) 00:17:53.133 4.314 - 4.338: 97.1837% ( 16) 00:17:53.133 4.338 - 4.361: 97.2430% ( 8) 00:17:53.133 4.361 - 4.385: 97.2801% ( 5) 00:17:53.133 4.385 - 4.409: 97.3245% ( 6) 00:17:53.133 4.409 - 4.433: 97.3690% ( 6) 00:17:53.133 4.433 - 4.456: 97.3987% ( 4) 00:17:53.133 4.456 - 4.480: 97.4209% ( 3) 00:17:53.133 4.480 - 4.504: 97.4357% ( 2) 00:17:53.133 4.527 - 4.551: 97.4431% ( 1) 00:17:53.133 4.551 - 4.575: 97.4505% ( 1) 00:17:53.133 4.599 - 4.622: 97.4654% ( 2) 00:17:53.133 4.670 - 4.693: 97.4728% ( 1) 00:17:53.133 4.693 - 4.717: 97.4876% ( 2) 00:17:53.133 4.717 - 4.741: 97.5098% ( 3) 00:17:53.133 4.741 - 4.764: 97.5246% ( 2) 00:17:53.133 4.764 - 4.788: 97.5543% ( 4) 00:17:53.133 4.788 - 4.812: 97.5691% ( 2) 00:17:53.133 4.812 - 4.836: 97.6136% ( 6) 00:17:53.133 4.836 - 4.859: 97.6506% ( 5) 00:17:53.133 4.859 - 4.883: 97.6877% ( 5) 00:17:53.133 4.883 - 4.907: 97.7025% ( 2) 00:17:53.133 4.907 - 4.930: 97.7618% ( 8) 00:17:53.133 4.930 - 4.954: 97.7840% ( 3) 00:17:53.133 4.954 - 4.978: 97.8285% ( 6) 00:17:53.133 4.978 - 5.001: 97.8878% ( 8) 00:17:53.133 5.001 - 5.025: 97.9397% ( 7) 00:17:53.133 5.025 - 5.049: 98.0212% ( 11) 00:17:53.133 5.049 - 5.073: 98.0508% ( 4) 00:17:53.133 5.073 - 5.096: 98.0657% ( 2) 00:17:53.133 5.096 - 5.120: 98.0805% ( 2) 00:17:53.133 5.120 - 5.144: 98.0953% ( 2) 00:17:53.133 5.144 - 5.167: 98.1324% ( 5) 00:17:53.133 5.167 - 5.191: 98.1620% ( 4) 00:17:53.133 5.191 - 5.215: 98.1768% ( 2) 00:17:53.133 5.215 - 5.239: 98.1917% ( 2) 00:17:53.133 5.262 - 5.286: 98.2065% ( 2) 00:17:53.133 5.286 - 5.310: 98.2287% ( 3) 00:17:53.133 5.310 - 5.333: 98.2509% ( 3) 00:17:53.133 5.333 - 5.357: 98.2584% ( 1) 00:17:53.133 5.357 - 5.381: 98.2658% ( 1) 00:17:53.133 5.476 - 5.499: 98.2732% ( 1) 00:17:53.133 5.499 - 5.523: 98.2806% ( 1) 
00:17:53.133 5.570 - 5.594: 98.2880% ( 1) 00:17:53.133 5.641 - 5.665: 98.2954% ( 1) 00:17:53.133 5.807 - 5.831: 98.3028% ( 1) 00:17:53.133 5.902 - 5.926: 98.3102% ( 1) 00:17:53.133 5.926 - 5.950: 98.3176% ( 1) 00:17:53.133 5.950 - 5.973: 98.3251% ( 1) 00:17:53.133 6.021 - 6.044: 98.3325% ( 1) 00:17:53.133 6.044 - 6.068: 98.3399% ( 1) 00:17:53.133 6.116 - 6.163: 98.3547% ( 2) 00:17:53.133 6.163 - 6.210: 98.3695% ( 2) 00:17:53.133 6.305 - 6.353: 98.3769% ( 1) 00:17:53.133 6.400 - 6.447: 98.3843% ( 1) 00:17:53.133 6.495 - 6.542: 98.3992% ( 2) 00:17:53.133 6.542 - 6.590: 98.4066% ( 1) 00:17:53.133 6.732 - 6.779: 98.4214% ( 2) 00:17:53.133 6.874 - 6.921: 98.4288% ( 1) 00:17:53.133 6.921 - 6.969: 98.4362% ( 1) 00:17:53.133 7.064 - 7.111: 98.4436% ( 1) 00:17:53.133 7.111 - 7.159: 98.4585% ( 2) 00:17:53.133 7.159 - 7.206: 98.4659% ( 1) 00:17:53.133 7.206 - 7.253: 98.4733% ( 1) 00:17:53.133 7.253 - 7.301: 98.4881% ( 2) 00:17:53.133 7.301 - 7.348: 98.4955% ( 1) 00:17:53.133 7.396 - 7.443: 98.5029% ( 1) 00:17:53.133 7.443 - 7.490: 98.5252% ( 3) 00:17:53.133 7.538 - 7.585: 98.5400% ( 2) 00:17:53.133 7.585 - 7.633: 98.5548% ( 2) 00:17:53.133 7.633 - 7.680: 98.5696% ( 2) 00:17:53.133 7.775 - 7.822: 98.5845% ( 2) 00:17:53.133 7.822 - 7.870: 98.5919% ( 1) 00:17:53.133 7.917 - 7.964: 98.5993% ( 1) 00:17:53.133 8.012 - 8.059: 98.6141% ( 2) 00:17:53.133 8.154 - 8.201: 98.6215% ( 1) 00:17:53.133 8.201 - 8.249: 98.6289% ( 1) 00:17:53.133 8.391 - 8.439: 98.6437% ( 2) 00:17:53.133 8.581 - 8.628: 98.6586% ( 2) 00:17:53.133 8.676 - 8.723: 98.6734% ( 2) 00:17:53.133 8.723 - 8.770: 98.6882% ( 2) 00:17:53.133 8.770 - 8.818: 98.7030% ( 2) 00:17:53.133 8.818 - 8.865: 98.7104% ( 1) 00:17:53.133 8.913 - 8.960: 98.7253% ( 2) 00:17:53.134 8.960 - 9.007: 98.7475% ( 3) 00:17:53.134 9.102 - 9.150: 98.7549% ( 1) 00:17:53.134 9.434 - 9.481: 98.7623% ( 1) 00:17:53.134 9.481 - 9.529: 98.7697% ( 1) 00:17:53.134 9.529 - 9.576: 98.7771% ( 1) 00:17:53.134 9.576 - 9.624: 98.7846% ( 1) 00:17:53.134 9.861 - 9.908: 98.7920% ( 1) 00:17:53.134 9.956 - 10.003: 98.8068% ( 2) 00:17:53.134 10.098 - 10.145: 98.8142% ( 1) 00:17:53.134 10.193 - 10.240: 98.8216% ( 1) 00:17:53.134 10.430 - 10.477: 98.8290% ( 1) 00:17:53.134 10.524 - 10.572: 98.8438% ( 2) 00:17:53.134 10.856 - 10.904: 98.8513% ( 1) 00:17:53.134 10.951 - 10.999: 98.8587% ( 1) 00:17:53.134 11.046 - 11.093: 98.8661% ( 1) 00:17:53.134 11.093 - 11.141: 98.8735% ( 1) 00:17:53.134 11.141 - 11.188: 98.8883% ( 2) 00:17:53.134 11.662 - 11.710: 98.8957% ( 1) 00:17:53.134 11.710 - 11.757: 98.9180% ( 3) 00:17:53.134 11.757 - 11.804: 98.9254% ( 1) 00:17:53.134 11.804 - 11.852: 98.9328% ( 1) 00:17:53.134 11.947 - 11.994: 98.9402% ( 1) 00:17:53.134 11.994 - 12.041: 98.9476% ( 1) 00:17:53.134 12.705 - 12.800: 98.9550% ( 1) 00:17:53.134 12.800 - 12.895: 98.9624% ( 1) 00:17:53.134 12.895 - 12.990: 98.9698% ( 1) 00:17:53.134 13.084 - 13.179: 98.9847% ( 2) 00:17:53.134 13.274 - 13.369: 98.9921% ( 1) 00:17:53.134 13.464 - 13.559: 98.9995% ( 1) 00:17:53.134 13.748 - 13.843: 99.0069% ( 1) 00:17:53.134 13.843 - 13.938: 99.0217% ( 2) 00:17:53.134 13.938 - 14.033: 99.0291% ( 1) 00:17:53.134 14.033 - 14.127: 99.0439% ( 2) 00:17:53.134 14.317 - 14.412: 99.0588% ( 2) 00:17:53.134 14.696 - 14.791: 99.0662% ( 1) 00:17:53.134 15.834 - 15.929: 99.0736% ( 1) 00:17:53.134 17.161 - 17.256: 99.0884% ( 2) 00:17:53.134 17.256 - 17.351: 99.1106% ( 3) 00:17:53.134 17.351 - 17.446: 99.1255% ( 2) 00:17:53.134 17.446 - 17.541: 99.1551% ( 4) 00:17:53.134 17.541 - 17.636: 99.1848% ( 4) 00:17:53.134 17.636 - 17.730: 99.2515% ( 9) 
00:17:53.134 17.730 - 17.825: 99.2737% ( 3) 00:17:53.134 17.825 - 17.920: 99.3108% ( 5) 00:17:53.134 17.920 - 18.015: 99.3700% ( 8) 00:17:53.134 18.015 - 18.110: 99.3849% ( 2) 00:17:53.134 18.110 - 18.204: 99.4516% ( 9) 00:17:53.134 18.204 - 18.299: 99.5405% ( 12) 00:17:53.134 18.299 - 18.394: 99.5776% ( 5) 00:17:53.134 18.394 - 18.489: 99.6294% ( 7) 00:17:53.134 18.489 - 18.584: 99.6739% ( 6) 00:17:53.134 18.584 - 18.679: 99.7035% ( 4) 00:17:53.134 18.679 - 18.773: 99.7332% ( 4) 00:17:53.134 18.773 - 18.868: 99.7554% ( 3) 00:17:53.134 18.963 - 19.058: 99.7703% ( 2) 00:17:53.134 19.058 - 19.153: 99.7777% ( 1) 00:17:53.134 19.153 - 19.247: 99.7851% ( 1) 00:17:53.134 19.247 - 19.342: 99.7925% ( 1) 00:17:53.134 19.627 - 19.721: 99.7999% ( 1) 00:17:53.134 20.385 - 20.480: 99.8147% ( 2) 00:17:53.134 20.480 - 20.575: 99.8221% ( 1) 00:17:53.134 22.376 - 22.471: 99.8295% ( 1) 00:17:53.134 22.850 - 22.945: 99.8370% ( 1) 00:17:53.134 23.609 - 23.704: 99.8444% ( 1) 00:17:53.134 25.979 - 26.169: 99.8518% ( 1) 00:17:53.134 26.359 - 26.548: 99.8592% ( 1) 00:17:53.134 26.738 - 26.927: 99.8666% ( 1) 00:17:53.134 27.496 - 27.686: 99.8740% ( 1) 00:17:53.134 31.858 - 32.047: 99.8814% ( 1) 00:17:53.134 33.375 - 33.564: 99.8888% ( 1) 00:17:53.134 3932.160 - 3956.433: 99.8962% ( 1) 00:17:53.134 3980.705 - 4004.978: 99.9852% ( 12) 00:17:53.134 4004.978 - 4029.250: 100.0000% ( 2) 00:17:53.134 00:17:53.134 Complete histogram 00:17:53.134 ================== 00:17:53.134 Range in us Cumulative Count 00:17:53.134 2.062 - 2.074: 3.1424% ( 424) 00:17:53.134 2.074 - 2.086: 27.2660% ( 3255) 00:17:53.134 2.086 - 2.098: 31.0531% ( 511) 00:17:53.134 2.098 - 2.110: 40.9546% ( 1336) 00:17:53.134 2.110 - 2.121: 58.1190% ( 2316) 00:17:53.134 2.121 - 2.133: 60.0608% ( 262) 00:17:53.134 2.133 - 2.145: 65.1301% ( 684) 00:17:53.134 2.145 - 2.157: 72.2004% ( 954) 00:17:53.134 2.157 - 2.169: 72.9415% ( 100) 00:17:53.134 2.169 - 2.181: 77.6106% ( 630) 00:17:53.134 2.181 - 2.193: 82.1389% ( 611) 00:17:53.134 2.193 - 2.204: 82.7911% ( 88) 00:17:53.134 2.204 - 2.216: 84.6587% ( 252) 00:17:53.134 2.216 - 2.228: 88.9795% ( 583) 00:17:53.134 2.228 - 2.240: 90.2764% ( 175) 00:17:53.134 2.240 - 2.252: 92.0329% ( 237) 00:17:53.134 2.252 - 2.264: 93.8413% ( 244) 00:17:53.134 2.264 - 2.276: 94.1081% ( 36) 00:17:53.134 2.276 - 2.287: 94.4045% ( 40) 00:17:53.134 2.287 - 2.299: 95.0419% ( 86) 00:17:53.134 2.299 - 2.311: 95.5829% ( 73) 00:17:53.134 2.311 - 2.323: 95.7978% ( 29) 00:17:53.134 2.323 - 2.335: 95.8497% ( 7) 00:17:53.134 2.335 - 2.347: 95.9090% ( 8) 00:17:53.134 2.347 - 2.359: 95.9979% ( 12) 00:17:53.134 2.359 - 2.370: 96.2721% ( 37) 00:17:53.134 2.370 - 2.382: 96.7242% ( 61) 00:17:53.134 2.382 - 2.394: 97.1096% ( 52) 00:17:53.134 2.394 - 2.406: 97.4209% ( 42) 00:17:53.134 2.406 - 2.418: 97.6432% ( 30) 00:17:53.134 2.418 - 2.430: 97.7692% ( 17) 00:17:53.134 2.430 - 2.441: 97.9100% ( 19) 00:17:53.134 2.441 - 2.453: 97.9916% ( 11) 00:17:53.134 2.453 - 2.465: 98.0879% ( 13) 00:17:53.134 2.465 - 2.477: 98.1917% ( 14) 00:17:53.134 2.477 - 2.489: 98.2732% ( 11) 00:17:53.134 2.489 - 2.501: 98.3176% ( 6) 00:17:53.134 2.501 - 2.513: 98.3621% ( 6) 00:17:53.134 2.513 - 2.524: 98.3918% ( 4) 00:17:53.134 2.536 - 2.548: 98.4066% ( 2) 00:17:53.134 2.548 - 2.560: 98.4436% ( 5) 00:17:53.134 2.560 - 2.572: 98.4510% ( 1) 00:17:53.134 2.584 - 2.596: 98.4585% ( 1) 00:17:53.134 2.667 - 2.679: 98.4659% ( 1) 00:17:53.134 2.679 - 2.690: 98.4733% ( 1) 00:17:53.134 2.714 - 2.726: 98.4955% ( 3) 00:17:53.134 2.726 - 2.738: 98.5029% ( 1) 00:17:53.134 2.750 - 2.761: 
98.5103% ( 1) 00:17:53.134 2.951 - 2.963: 98.5177% ( 1) 00:17:53.134 3.413 - 3.437: 98.5252% ( 1) 00:17:53.134 3.437 - 3.461: 98.5326% ( 1) 00:17:53.134 3.461 - 3.484: 98.5400% ( 1) 00:17:53.134 3.484 - 3.508: 98.5548% ( 2) 00:17:53.134 3.508 - 3.532: 98.5696% ( 2) 00:17:53.134 3.532 - 3.556: 98.5770% ( 1) 00:17:53.134 3.556 - 3.579: 98.5919% ( 2) 00:17:53.134 3.579 - 3.603: 98.6067% ( 2) 00:17:53.134 3.603 - 3.627: 98.6215% ( 2) 00:17:53.134 3.627 - 3.650: 98.6289% ( 1) 00:17:53.134 3.650 - 3.674: 98.6363% ( 1) 00:17:53.134 3.674 - 3.698: 9[2024-07-23 06:13:46.149390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:53.134 8.6437% ( 1) 00:17:53.134 3.721 - 3.745: 98.6512% ( 1) 00:17:53.134 3.745 - 3.769: 98.6586% ( 1) 00:17:53.134 3.769 - 3.793: 98.6660% ( 1) 00:17:53.134 3.793 - 3.816: 98.6808% ( 2) 00:17:53.134 3.840 - 3.864: 98.6956% ( 2) 00:17:53.134 3.887 - 3.911: 98.7104% ( 2) 00:17:53.134 3.911 - 3.935: 98.7179% ( 1) 00:17:53.134 3.935 - 3.959: 98.7253% ( 1) 00:17:53.134 3.959 - 3.982: 98.7327% ( 1) 00:17:53.134 3.982 - 4.006: 98.7401% ( 1) 00:17:53.134 4.006 - 4.030: 98.7475% ( 1) 00:17:53.134 4.148 - 4.172: 98.7549% ( 1) 00:17:53.134 4.243 - 4.267: 98.7623% ( 1) 00:17:53.134 4.290 - 4.314: 98.7697% ( 1) 00:17:53.134 4.907 - 4.930: 98.7771% ( 1) 00:17:53.134 4.930 - 4.954: 98.7846% ( 1) 00:17:53.134 4.954 - 4.978: 98.7994% ( 2) 00:17:53.134 5.167 - 5.191: 98.8068% ( 1) 00:17:53.134 5.239 - 5.262: 98.8142% ( 1) 00:17:53.134 5.381 - 5.404: 98.8216% ( 1) 00:17:53.134 5.547 - 5.570: 98.8290% ( 1) 00:17:53.134 5.594 - 5.618: 98.8364% ( 1) 00:17:53.134 5.665 - 5.689: 98.8438% ( 1) 00:17:53.134 5.713 - 5.736: 98.8513% ( 1) 00:17:53.134 5.973 - 5.997: 98.8587% ( 1) 00:17:53.134 6.068 - 6.116: 98.8661% ( 1) 00:17:53.134 6.353 - 6.400: 98.8809% ( 2) 00:17:53.134 6.542 - 6.590: 98.8883% ( 1) 00:17:53.134 6.637 - 6.684: 98.8957% ( 1) 00:17:53.134 6.874 - 6.921: 98.9031% ( 1) 00:17:53.134 6.969 - 7.016: 98.9105% ( 1) 00:17:53.134 7.301 - 7.348: 98.9180% ( 1) 00:17:53.134 7.633 - 7.680: 98.9254% ( 1) 00:17:53.134 15.644 - 15.739: 98.9402% ( 2) 00:17:53.134 15.739 - 15.834: 98.9550% ( 2) 00:17:53.134 15.834 - 15.929: 98.9624% ( 1) 00:17:53.134 15.929 - 16.024: 98.9772% ( 2) 00:17:53.134 16.024 - 16.119: 98.9921% ( 2) 00:17:53.134 16.119 - 16.213: 99.0217% ( 4) 00:17:53.134 16.213 - 16.308: 99.0736% ( 7) 00:17:53.134 16.308 - 16.403: 99.0958% ( 3) 00:17:53.134 16.403 - 16.498: 99.1625% ( 9) 00:17:53.134 16.498 - 16.593: 99.2218% ( 8) 00:17:53.134 16.593 - 16.687: 99.2663% ( 6) 00:17:53.134 16.687 - 16.782: 99.2885% ( 3) 00:17:53.135 16.782 - 16.877: 99.3404% ( 7) 00:17:53.135 16.877 - 16.972: 99.3552% ( 2) 00:17:53.135 16.972 - 17.067: 99.3700% ( 2) 00:17:53.135 17.161 - 17.256: 99.3923% ( 3) 00:17:53.135 17.636 - 17.730: 99.4071% ( 2) 00:17:53.135 17.825 - 17.920: 99.4145% ( 1) 00:17:53.135 17.920 - 18.015: 99.4293% ( 2) 00:17:53.135 18.868 - 18.963: 99.4367% ( 1) 00:17:53.135 31.668 - 31.858: 99.4442% ( 1) 00:17:53.135 47.407 - 47.597: 99.4516% ( 1) 00:17:53.135 3980.705 - 4004.978: 99.9111% ( 62) 00:17:53.135 4004.978 - 4029.250: 100.0000% ( 12) 00:17:53.135 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:53.135 06:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:53.135 [ 00:17:53.135 { 00:17:53.135 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:53.135 "subtype": "Discovery", 00:17:53.135 "listen_addresses": [], 00:17:53.135 "allow_any_host": true, 00:17:53.135 "hosts": [] 00:17:53.135 }, 00:17:53.135 { 00:17:53.135 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:53.135 "subtype": "NVMe", 00:17:53.135 "listen_addresses": [ 00:17:53.135 { 00:17:53.135 "trtype": "VFIOUSER", 00:17:53.135 "adrfam": "IPv4", 00:17:53.135 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:53.135 "trsvcid": "0" 00:17:53.135 } 00:17:53.135 ], 00:17:53.135 "allow_any_host": true, 00:17:53.135 "hosts": [], 00:17:53.135 "serial_number": "SPDK1", 00:17:53.135 "model_number": "SPDK bdev Controller", 00:17:53.135 "max_namespaces": 32, 00:17:53.135 "min_cntlid": 1, 00:17:53.135 "max_cntlid": 65519, 00:17:53.135 "namespaces": [ 00:17:53.135 { 00:17:53.135 "nsid": 1, 00:17:53.135 "bdev_name": "Malloc1", 00:17:53.135 "name": "Malloc1", 00:17:53.135 "nguid": "F5835EBABFB74245858A26A7FA2EAB2C", 00:17:53.135 "uuid": "f5835eba-bfb7-4245-858a-26a7fa2eab2c" 00:17:53.135 }, 00:17:53.135 { 00:17:53.135 "nsid": 2, 00:17:53.135 "bdev_name": "Malloc3", 00:17:53.135 "name": "Malloc3", 00:17:53.135 "nguid": "1532D27183EA4831963FD0FA37DA3A48", 00:17:53.135 "uuid": "1532d271-83ea-4831-963f-d0fa37da3a48" 00:17:53.135 } 00:17:53.135 ] 00:17:53.135 }, 00:17:53.135 { 00:17:53.135 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:53.135 "subtype": "NVMe", 00:17:53.135 "listen_addresses": [ 00:17:53.135 { 00:17:53.135 "trtype": "VFIOUSER", 00:17:53.135 "adrfam": "IPv4", 00:17:53.135 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:53.135 "trsvcid": "0" 00:17:53.135 } 00:17:53.135 ], 00:17:53.135 "allow_any_host": true, 00:17:53.135 "hosts": [], 00:17:53.135 "serial_number": "SPDK2", 00:17:53.135 "model_number": "SPDK bdev Controller", 00:17:53.135 "max_namespaces": 32, 00:17:53.135 "min_cntlid": 1, 00:17:53.135 "max_cntlid": 65519, 00:17:53.135 "namespaces": [ 00:17:53.135 { 00:17:53.135 "nsid": 1, 00:17:53.135 "bdev_name": "Malloc2", 00:17:53.135 "name": "Malloc2", 00:17:53.135 "nguid": "974ED5866CA5490BBD5BFE4D99A31542", 00:17:53.135 "uuid": "974ed586-6ca5-490b-bd5b-fe4d99a31542" 00:17:53.135 } 00:17:53.135 ] 00:17:53.135 } 00:17:53.135 ] 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1736425 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:53.135 06:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:53.135 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:53.394 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.394 [2024-07-23 06:13:46.619138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:53.394 Malloc4 00:17:53.394 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:53.652 [2024-07-23 06:13:46.967667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:53.652 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:53.917 Asynchronous Event Request test 00:17:53.917 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:53.917 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:53.917 Registering asynchronous event callbacks... 00:17:53.917 Starting namespace attribute notice tests for all controllers... 00:17:53.917 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:53.917 aer_cb - Changed Namespace 00:17:53.917 Cleaning up... 
00:17:53.917 [ 00:17:53.917 { 00:17:53.917 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:53.917 "subtype": "Discovery", 00:17:53.917 "listen_addresses": [], 00:17:53.917 "allow_any_host": true, 00:17:53.917 "hosts": [] 00:17:53.917 }, 00:17:53.917 { 00:17:53.917 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:53.917 "subtype": "NVMe", 00:17:53.917 "listen_addresses": [ 00:17:53.917 { 00:17:53.917 "trtype": "VFIOUSER", 00:17:53.917 "adrfam": "IPv4", 00:17:53.917 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:53.917 "trsvcid": "0" 00:17:53.917 } 00:17:53.917 ], 00:17:53.917 "allow_any_host": true, 00:17:53.917 "hosts": [], 00:17:53.917 "serial_number": "SPDK1", 00:17:53.917 "model_number": "SPDK bdev Controller", 00:17:53.917 "max_namespaces": 32, 00:17:53.917 "min_cntlid": 1, 00:17:53.917 "max_cntlid": 65519, 00:17:53.917 "namespaces": [ 00:17:53.917 { 00:17:53.917 "nsid": 1, 00:17:53.917 "bdev_name": "Malloc1", 00:17:53.917 "name": "Malloc1", 00:17:53.917 "nguid": "F5835EBABFB74245858A26A7FA2EAB2C", 00:17:53.917 "uuid": "f5835eba-bfb7-4245-858a-26a7fa2eab2c" 00:17:53.917 }, 00:17:53.917 { 00:17:53.917 "nsid": 2, 00:17:53.917 "bdev_name": "Malloc3", 00:17:53.917 "name": "Malloc3", 00:17:53.917 "nguid": "1532D27183EA4831963FD0FA37DA3A48", 00:17:53.917 "uuid": "1532d271-83ea-4831-963f-d0fa37da3a48" 00:17:53.917 } 00:17:53.917 ] 00:17:53.917 }, 00:17:53.917 { 00:17:53.917 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:53.917 "subtype": "NVMe", 00:17:53.917 "listen_addresses": [ 00:17:53.917 { 00:17:53.917 "trtype": "VFIOUSER", 00:17:53.917 "adrfam": "IPv4", 00:17:53.917 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:53.917 "trsvcid": "0" 00:17:53.917 } 00:17:53.917 ], 00:17:53.917 "allow_any_host": true, 00:17:53.917 "hosts": [], 00:17:53.917 "serial_number": "SPDK2", 00:17:53.917 "model_number": "SPDK bdev Controller", 00:17:53.917 "max_namespaces": 32, 00:17:53.917 "min_cntlid": 1, 00:17:53.917 "max_cntlid": 65519, 00:17:53.917 "namespaces": [ 00:17:53.917 { 00:17:53.917 "nsid": 1, 00:17:53.918 "bdev_name": "Malloc2", 00:17:53.918 "name": "Malloc2", 00:17:53.918 "nguid": "974ED5866CA5490BBD5BFE4D99A31542", 00:17:53.918 "uuid": "974ed586-6ca5-490b-bd5b-fe4d99a31542" 00:17:53.918 }, 00:17:53.918 { 00:17:53.918 "nsid": 2, 00:17:53.918 "bdev_name": "Malloc4", 00:17:53.918 "name": "Malloc4", 00:17:53.918 "nguid": "91937EACAA59408CB596D4E95A98F875", 00:17:53.918 "uuid": "91937eac-aa59-408c-b596-d4e95a98f875" 00:17:53.918 } 00:17:53.918 ] 00:17:53.918 } 00:17:53.918 ] 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1736425 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1730839 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1730839 ']' 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1730839 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1730839 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1730839' 00:17:53.918 killing process with pid 1730839 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1730839 00:17:53.918 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1730839 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1736568 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1736568' 00:17:54.537 Process pid: 1736568 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1736568 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1736568 ']' 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.537 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:54.537 [2024-07-23 06:13:47.618761] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:54.537 [2024-07-23 06:13:47.619870] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:17:54.537 [2024-07-23 06:13:47.619944] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.537 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.537 [2024-07-23 06:13:47.652433] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:54.537 [2024-07-23 06:13:47.683760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.537 [2024-07-23 06:13:47.773048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.537 [2024-07-23 06:13:47.773108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.537 [2024-07-23 06:13:47.773134] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.537 [2024-07-23 06:13:47.773147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.537 [2024-07-23 06:13:47.773159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.537 [2024-07-23 06:13:47.773246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.537 [2024-07-23 06:13:47.773317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.537 [2024-07-23 06:13:47.773407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.537 [2024-07-23 06:13:47.773409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.537 [2024-07-23 06:13:47.872878] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:54.537 [2024-07-23 06:13:47.873065] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:54.537 [2024-07-23 06:13:47.873352] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:54.537 [2024-07-23 06:13:47.873998] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:54.537 [2024-07-23 06:13:47.874228] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
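The shell trace that follows performs the same per-device bring-up for both vfio-user devices: create the VFIOUSER transport (here with the -M and -I options, since this run starts the target in interrupt mode), create a malloc bdev, create a subsystem, attach the bdev as namespace 1, and add a VFIOUSER listener on the device's socket directory. A condensed sketch of that sequence, using only commands and arguments visible in the trace itself (the $rpc shorthand for the full rpc.py path is introduced here purely for readability and is not part of the test script), would look roughly like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # VFIOUSER transport; -M and -I are the flags passed by this interrupt-mode run
  $rpc nvmf_create_transport -t VFIOUSER -M -I

  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i          # 64 MB malloc bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

Once the listeners are up, the initiators used earlier in this log (spdk_nvme_perf, the example apps, the aer test) address each device with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user$i/$i subnqn:nqn.2019-07.io.spdk:cnode$i'.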
00:17:54.797 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.798 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:54.798 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:55.733 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:55.993 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:55.993 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:55.993 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:55.993 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:55.993 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:56.251 Malloc1 00:17:56.251 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:56.510 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:56.768 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:57.025 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:57.025 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:57.025 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:57.283 Malloc2 00:17:57.283 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:57.541 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:57.799 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1736568 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@948 -- # '[' -z 1736568 ']' 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1736568 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736568 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736568' 00:17:58.058 killing process with pid 1736568 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1736568 00:17:58.058 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1736568 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:58.626 00:17:58.626 real 0m52.737s 00:17:58.626 user 3m28.556s 00:17:58.626 sys 0m4.347s 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:58.626 ************************************ 00:17:58.626 END TEST nvmf_vfio_user 00:17:58.626 ************************************ 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.626 ************************************ 00:17:58.626 START TEST nvmf_vfio_user_nvme_compliance 00:17:58.626 ************************************ 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:58.626 * Looking for test storage... 
00:17:58.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1737047 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1737047' 00:17:58.626 Process pid: 1737047 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1737047 00:17:58.626 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1737047 ']' 00:17:58.627 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.627 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.627 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.627 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.627 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.627 [2024-07-23 06:13:51.835288] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:58.627 [2024-07-23 06:13:51.835380] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.627 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.627 [2024-07-23 06:13:51.870295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:58.627 [2024-07-23 06:13:51.898788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:58.886 [2024-07-23 06:13:51.987541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.886 [2024-07-23 06:13:51.987607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.886 [2024-07-23 06:13:51.987628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.886 [2024-07-23 06:13:51.987641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.886 [2024-07-23 06:13:51.987651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.886 [2024-07-23 06:13:51.987717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.886 [2024-07-23 06:13:51.987786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.887 [2024-07-23 06:13:51.987788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.887 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.887 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:17:58.887 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:59.825 malloc0 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:59.825 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:59.826 06:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.826 06:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:00.091 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.091 00:18:00.091 00:18:00.091 CUnit - A unit testing framework for C - Version 2.1-3 00:18:00.091 http://cunit.sourceforge.net/ 00:18:00.091 00:18:00.091 00:18:00.091 Suite: nvme_compliance 00:18:00.091 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-23 06:13:53.318161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.091 [2024-07-23 06:13:53.319578] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:00.091 [2024-07-23 06:13:53.319626] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:00.091 [2024-07-23 06:13:53.319641] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:00.091 [2024-07-23 06:13:53.321176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.091 passed 00:18:00.091 Test: admin_identify_ctrlr_verify_fused ...[2024-07-23 06:13:53.405751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.091 [2024-07-23 06:13:53.408776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.350 passed 00:18:00.350 Test: admin_identify_ns ...[2024-07-23 06:13:53.495197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.350 [2024-07-23 06:13:53.555661] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:00.350 [2024-07-23 06:13:53.563647] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:00.350 [2024-07-23 06:13:53.584743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.350 passed 00:18:00.350 Test: admin_get_features_mandatory_features ...[2024-07-23 06:13:53.666935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling 
controller 00:18:00.350 [2024-07-23 06:13:53.669952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.610 passed 00:18:00.610 Test: admin_get_features_optional_features ...[2024-07-23 06:13:53.754470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.610 [2024-07-23 06:13:53.757488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.610 passed 00:18:00.610 Test: admin_set_features_number_of_queues ...[2024-07-23 06:13:53.842124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.610 [2024-07-23 06:13:53.946737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.868 passed 00:18:00.868 Test: admin_get_log_page_mandatory_logs ...[2024-07-23 06:13:54.030404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.868 [2024-07-23 06:13:54.033432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.868 passed 00:18:00.868 Test: admin_get_log_page_with_lpo ...[2024-07-23 06:13:54.117587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.868 [2024-07-23 06:13:54.183628] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:00.868 [2024-07-23 06:13:54.196729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.127 passed 00:18:01.127 Test: fabric_property_get ...[2024-07-23 06:13:54.280011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:01.127 [2024-07-23 06:13:54.281285] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:01.127 [2024-07-23 06:13:54.284037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.127 passed 00:18:01.127 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-23 06:13:54.368565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:01.127 [2024-07-23 06:13:54.369875] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:01.127 [2024-07-23 06:13:54.371584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.127 passed 00:18:01.127 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-23 06:13:54.455824] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:01.384 [2024-07-23 06:13:54.539637] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:01.384 [2024-07-23 06:13:54.555625] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:01.384 [2024-07-23 06:13:54.560737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.384 passed 00:18:01.384 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-23 06:13:54.646143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:01.384 [2024-07-23 06:13:54.647435] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:01.384 [2024-07-23 06:13:54.649169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.384 passed 00:18:01.642 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-23 06:13:54.732122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user: enabling controller 00:18:01.642 [2024-07-23 06:13:54.807635] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:01.642 [2024-07-23 06:13:54.831639] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:01.642 [2024-07-23 06:13:54.836744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.642 passed 00:18:01.642 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-23 06:13:54.919245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:01.642 [2024-07-23 06:13:54.920536] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:01.642 [2024-07-23 06:13:54.920573] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:01.642 [2024-07-23 06:13:54.922270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.642 passed 00:18:01.900 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-23 06:13:55.005343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:01.900 [2024-07-23 06:13:55.096627] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:01.900 [2024-07-23 06:13:55.104637] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:01.900 [2024-07-23 06:13:55.112622] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:01.900 [2024-07-23 06:13:55.120621] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:01.900 [2024-07-23 06:13:55.149737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.900 passed 00:18:01.900 Test: admin_create_io_sq_verify_pc ...[2024-07-23 06:13:55.233232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:02.160 [2024-07-23 06:13:55.249637] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:02.160 [2024-07-23 06:13:55.267537] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:02.160 passed 00:18:02.160 Test: admin_create_io_qp_max_qps ...[2024-07-23 06:13:55.348089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:03.539 [2024-07-23 06:13:56.447630] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:03.539 [2024-07-23 06:13:56.836407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:03.539 passed 00:18:03.798 Test: admin_create_io_sq_shared_cq ...[2024-07-23 06:13:56.917645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:03.798 [2024-07-23 06:13:57.051636] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:03.798 [2024-07-23 06:13:57.088723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:03.798 passed 00:18:03.798 00:18:03.798 Run Summary: Type Total Ran Passed Failed Inactive 00:18:03.798 suites 1 1 n/a 0 0 00:18:03.798 tests 18 18 18 0 0 00:18:03.798 asserts 360 360 360 0 n/a 00:18:03.798 00:18:03.798 Elapsed time = 1.563 seconds 00:18:03.798 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 
1737047 00:18:03.798 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1737047 ']' 00:18:03.798 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1737047 00:18:03.798 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:18:04.056 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.056 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737047 00:18:04.056 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.056 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.056 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737047' 00:18:04.056 killing process with pid 1737047 00:18:04.056 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1737047 00:18:04.056 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1737047 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:04.315 00:18:04.315 real 0m5.702s 00:18:04.315 user 0m16.036s 00:18:04.315 sys 0m0.545s 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:04.315 ************************************ 00:18:04.315 END TEST nvmf_vfio_user_nvme_compliance 00:18:04.315 ************************************ 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.315 ************************************ 00:18:04.315 START TEST nvmf_vfio_user_fuzz 00:18:04.315 ************************************ 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:04.315 * Looking for test storage... 
00:18:04.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1737823 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1737823' 00:18:04.315 Process pid: 1737823 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1737823 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1737823 ']' 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.315 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:04.574 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.574 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:18:04.574 06:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.946 malloc0 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.946 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.947 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:18:05.947 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:38.060 Fuzzing completed. Shutting down the fuzz application 00:18:38.060 00:18:38.060 Dumping successful admin opcodes: 00:18:38.060 8, 9, 10, 24, 00:18:38.060 Dumping successful io opcodes: 00:18:38.060 0, 00:18:38.060 NS: 0x200003a1ef00 I/O qp, Total commands completed: 565067, total successful commands: 2171, random_seed: 3387471616 00:18:38.060 NS: 0x200003a1ef00 admin qp, Total commands completed: 72111, total successful commands: 568, random_seed: 2567352704 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1737823 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1737823 ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1737823 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737823 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737823' 00:18:38.060 killing process with pid 1737823 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1737823 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1737823 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:38.060 00:18:38.060 real 0m33.204s 00:18:38.060 user 0m33.351s 00:18:38.060 sys 0m28.400s 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:38.060 
************************************ 00:18:38.060 END TEST nvmf_vfio_user_fuzz 00:18:38.060 ************************************ 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:38.060 ************************************ 00:18:38.060 START TEST nvmf_auth_target 00:18:38.060 ************************************ 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:38.060 * Looking for test storage... 00:18:38.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.060 06:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.060 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.061 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a 
pci_devs 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.964 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:39.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:39.965 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:39.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:39.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:18:39.965 00:18:39.965 --- 10.0.0.2 ping statistics --- 00:18:39.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.965 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:18:39.965 00:18:39.965 --- 10.0.0.1 ping statistics --- 00:18:39.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.965 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1743336 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1743336 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1743336 ']' 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.965 06:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1743359 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:39.965 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4318763b45189f6edc17429f4668ea8da255a7653de55671 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kqV 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4318763b45189f6edc17429f4668ea8da255a7653de55671 0 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4318763b45189f6edc17429f4668ea8da255a7653de55671 0 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # 
key=4318763b45189f6edc17429f4668ea8da255a7653de55671 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:39.966 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kqV 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kqV 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.kqV 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9eb0429c5acb51229ee27184e1f47add6aac8f286c6749b0ba287072e891e5d4 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DIG 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9eb0429c5acb51229ee27184e1f47add6aac8f286c6749b0ba287072e891e5d4 3 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9eb0429c5acb51229ee27184e1f47add6aac8f286c6749b0ba287072e891e5d4 3 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9eb0429c5acb51229ee27184e1f47add6aac8f286c6749b0ba287072e891e5d4 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DIG 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DIG 00:18:40.224 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.DIG 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=152eb7ce637d8904baf2de222af54bd2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FGn 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 152eb7ce637d8904baf2de222af54bd2 1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 152eb7ce637d8904baf2de222af54bd2 1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=152eb7ce637d8904baf2de222af54bd2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FGn 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FGn 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.FGn 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42f577f528fc9ce46a7c3e91ff9aa87d6bec0c5e1bd1d881 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zWp 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42f577f528fc9ce46a7c3e91ff9aa87d6bec0c5e1bd1d881 2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 42f577f528fc9ce46a7c3e91ff9aa87d6bec0c5e1bd1d881 2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42f577f528fc9ce46a7c3e91ff9aa87d6bec0c5e1bd1d881 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zWp 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zWp 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.zWp 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=68bc91107f71be6f0d481ef222b33b90bbd9581b11e77e64 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KKt 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 68bc91107f71be6f0d481ef222b33b90bbd9581b11e77e64 2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 68bc91107f71be6f0d481ef222b33b90bbd9581b11e77e64 2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=68bc91107f71be6f0d481ef222b33b90bbd9581b11e77e64 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KKt 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KKt 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.KKt 
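
At this point the trace has run gen_dhchap_key once per digest/length pair. The pattern is visible above: read len/2 random bytes as hex with xxd, stage the result in a mktemp file locked down to mode 0600, and let an inline python step wrap the ASCII key into the DHHC-1:<digest index>:<base64 payload>: secret string that reappears later on the nvme connect lines. The snippet below is a minimal standalone sketch of that flow, not the nvmf/common.sh source; in particular the CRC-32 suffix and its byte order inside the base64 payload are assumptions inferred from the NVMe DH-HMAC-CHAP secret representation.

    # Hypothetical re-creation of the gen_dhchap_key flow traced above.
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key file
        # digest indices as used in the trace: null=0 sha256=1 sha384=2 sha512=3
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        python3 -c '
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()
    crc = struct.pack("<I", zlib.crc32(key))   # CRC suffix and byte order are assumptions
    print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    ' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"                                          # path consumed as keys[i]/ckeys[i]
    }

    gen_dhchap_key_sketch sha256 32   # prints a /tmp/spdk.key-sha256.* path, as in the trace
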
00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=faa918f8a5e8b46a25e817c74b059d05 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8pY 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key faa918f8a5e8b46a25e817c74b059d05 1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 faa918f8a5e8b46a25e817c74b059d05 1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=faa918f8a5e8b46a25e817c74b059d05 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:40.225 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8pY 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8pY 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.8pY 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8581efe9354a4bd24ce8dd09291269a0c79ac1d8bf97eaedbcfebc59acdb3cdc 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t 
spdk.key-sha512.XXX 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.C4u 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8581efe9354a4bd24ce8dd09291269a0c79ac1d8bf97eaedbcfebc59acdb3cdc 3 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8581efe9354a4bd24ce8dd09291269a0c79ac1d8bf97eaedbcfebc59acdb3cdc 3 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8581efe9354a4bd24ce8dd09291269a0c79ac1d8bf97eaedbcfebc59acdb3cdc 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.C4u 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.C4u 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.C4u 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1743336 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1743336 ']' 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.483 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.740 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:40.740 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1743359 /var/tmp/host.sock 00:18:40.740 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1743359 ']' 00:18:40.740 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:40.740 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.740 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
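
From here the trace registers every generated key file on both sides and then iterates over digest/dhgroup/key combinations: it restricts the host to one digest and dhgroup, adds the host NQN to the subsystem with the chosen key pair, attaches a controller over 10.0.0.2:4420, inspects the qpair's auth state, and detaches again. Below is a condensed, hypothetical sketch of that sequence; hostrpc's expansion to rpc.py -s /var/tmp/host.sock appears verbatim in the log, while rpc_cmd's default socket (/var/tmp/spdk.sock on the target side) is an assumption, and the key file paths are the ones generated above.

    # Condensed sketch of the key-registration / connect loop that the trace below follows.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    rpc_cmd() { "$RPC" "$@"; }                        # target side, default /var/tmp/spdk.sock (assumed)
    hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }  # host-side spdk_tgt, as expanded in the log

    # Key files produced earlier in this trace (ckeys[3] is intentionally empty).
    keys=(/tmp/spdk.key-null.kqV /tmp/spdk.key-sha256.FGn /tmp/spdk.key-sha384.KKt /tmp/spdk.key-sha512.C4u)
    ckeys=(/tmp/spdk.key-sha512.DIG /tmp/spdk.key-sha384.zWp /tmp/spdk.key-sha256.8pY "")

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        hostrpc keyring_file_add_key "key$i" "${keys[i]}"
        if [[ -n ${ckeys[i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done

    # One digest/dhgroup/key combination (sha256, null, key0), as exercised first below:
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
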
00:18:40.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:40.741 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.741 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kqV 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kqV 00:18:40.998 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kqV 00:18:41.256 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.DIG ]] 00:18:41.256 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DIG 00:18:41.256 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.256 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.256 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.256 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DIG 00:18:41.256 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DIG 00:18:41.513 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:41.513 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.FGn 00:18:41.513 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.513 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.514 06:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.514 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.FGn 00:18:41.514 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.FGn 00:18:41.771 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.zWp ]] 00:18:41.771 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zWp 00:18:41.771 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.771 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.771 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.771 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zWp 00:18:41.771 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zWp 00:18:42.029 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:42.029 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KKt 00:18:42.029 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.029 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.029 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.029 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KKt 00:18:42.029 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KKt 00:18:42.286 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.8pY ]] 00:18:42.286 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8pY 00:18:42.286 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.286 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.286 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.286 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8pY 00:18:42.286 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8pY 00:18:42.543 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:42.543 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.C4u 00:18:42.543 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.543 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.543 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.543 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.C4u 00:18:42.543 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.C4u 00:18:42.801 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:42.801 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:42.801 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.801 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.801 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.801 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.059 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.060 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.318 00:18:43.318 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.318 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.318 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.576 { 00:18:43.576 "cntlid": 1, 00:18:43.576 "qid": 0, 00:18:43.576 "state": "enabled", 00:18:43.576 "thread": "nvmf_tgt_poll_group_000", 00:18:43.576 "listen_address": { 00:18:43.576 "trtype": "TCP", 00:18:43.576 "adrfam": "IPv4", 00:18:43.576 "traddr": "10.0.0.2", 00:18:43.576 "trsvcid": "4420" 00:18:43.576 }, 00:18:43.576 "peer_address": { 00:18:43.576 "trtype": "TCP", 00:18:43.576 "adrfam": "IPv4", 00:18:43.576 "traddr": "10.0.0.1", 00:18:43.576 "trsvcid": "34716" 00:18:43.576 }, 00:18:43.576 "auth": { 00:18:43.576 "state": "completed", 00:18:43.576 "digest": "sha256", 00:18:43.576 "dhgroup": "null" 00:18:43.576 } 00:18:43.576 } 00:18:43.576 ]' 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.576 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.834 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.767 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.025 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:45.282 00:18:45.539 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.539 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.539 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.539 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.539 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.539 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.539 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.797 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.797 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.797 { 00:18:45.797 "cntlid": 3, 00:18:45.797 "qid": 0, 00:18:45.797 "state": "enabled", 00:18:45.797 "thread": "nvmf_tgt_poll_group_000", 00:18:45.797 "listen_address": { 00:18:45.797 "trtype": "TCP", 00:18:45.797 "adrfam": "IPv4", 00:18:45.797 "traddr": "10.0.0.2", 00:18:45.797 "trsvcid": "4420" 00:18:45.797 }, 00:18:45.797 "peer_address": { 00:18:45.797 "trtype": "TCP", 00:18:45.797 "adrfam": "IPv4", 00:18:45.797 "traddr": "10.0.0.1", 00:18:45.797 "trsvcid": "48858" 00:18:45.797 }, 00:18:45.797 "auth": { 00:18:45.797 "state": "completed", 00:18:45.797 "digest": "sha256", 00:18:45.797 "dhgroup": "null" 00:18:45.797 } 00:18:45.797 } 00:18:45.797 ]' 00:18:45.797 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.797 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.797 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.797 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.797 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.797 06:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.797 06:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.797 06:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.055 06:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.989 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.989 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.246 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.504 00:18:47.504 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.504 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.504 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.762 { 00:18:47.762 "cntlid": 5, 00:18:47.762 "qid": 0, 00:18:47.762 "state": "enabled", 00:18:47.762 "thread": "nvmf_tgt_poll_group_000", 00:18:47.762 "listen_address": { 00:18:47.762 "trtype": "TCP", 00:18:47.762 "adrfam": "IPv4", 00:18:47.762 "traddr": "10.0.0.2", 00:18:47.762 "trsvcid": "4420" 00:18:47.762 }, 00:18:47.762 "peer_address": { 00:18:47.762 "trtype": "TCP", 00:18:47.762 "adrfam": "IPv4", 00:18:47.762 "traddr": "10.0.0.1", 00:18:47.762 "trsvcid": "48872" 00:18:47.762 }, 00:18:47.762 "auth": { 00:18:47.762 "state": "completed", 00:18:47.762 "digest": "sha256", 00:18:47.762 "dhgroup": "null" 00:18:47.762 } 00:18:47.762 } 00:18:47.762 ]' 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.762 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.020 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:48.020 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.020 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.020 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.020 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.279 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.241 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.499 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.757 00:18:49.757 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.757 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.757 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.015 { 00:18:50.015 "cntlid": 7, 00:18:50.015 "qid": 0, 00:18:50.015 "state": "enabled", 00:18:50.015 "thread": "nvmf_tgt_poll_group_000", 00:18:50.015 "listen_address": { 00:18:50.015 "trtype": "TCP", 00:18:50.015 "adrfam": "IPv4", 00:18:50.015 "traddr": "10.0.0.2", 00:18:50.015 "trsvcid": "4420" 00:18:50.015 }, 00:18:50.015 "peer_address": { 00:18:50.015 "trtype": "TCP", 00:18:50.015 "adrfam": "IPv4", 00:18:50.015 "traddr": "10.0.0.1", 00:18:50.015 "trsvcid": "48890" 00:18:50.015 }, 00:18:50.015 "auth": { 00:18:50.015 "state": "completed", 00:18:50.015 "digest": "sha256", 00:18:50.015 "dhgroup": "null" 00:18:50.015 } 00:18:50.015 } 00:18:50.015 ]' 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.015 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.273 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.206 06:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.206 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.464 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.029 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.029 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.029 06:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.029 { 00:18:52.029 "cntlid": 9, 00:18:52.029 "qid": 0, 00:18:52.029 "state": "enabled", 00:18:52.029 "thread": "nvmf_tgt_poll_group_000", 00:18:52.029 "listen_address": { 00:18:52.030 "trtype": "TCP", 00:18:52.030 "adrfam": "IPv4", 00:18:52.030 "traddr": "10.0.0.2", 00:18:52.030 "trsvcid": "4420" 00:18:52.030 }, 00:18:52.030 "peer_address": { 00:18:52.030 "trtype": "TCP", 00:18:52.030 "adrfam": "IPv4", 00:18:52.030 "traddr": "10.0.0.1", 00:18:52.030 "trsvcid": "48914" 00:18:52.030 }, 00:18:52.030 "auth": { 00:18:52.030 "state": "completed", 00:18:52.030 "digest": "sha256", 00:18:52.030 "dhgroup": "ffdhe2048" 00:18:52.030 } 00:18:52.030 } 00:18:52.030 ]' 00:18:52.030 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.290 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.290 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.290 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.290 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.290 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.290 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.290 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.549 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.485 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.743 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.000 00:18:54.000 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.000 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.000 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.257 { 00:18:54.257 "cntlid": 11, 00:18:54.257 "qid": 0, 00:18:54.257 "state": "enabled", 00:18:54.257 "thread": "nvmf_tgt_poll_group_000", 00:18:54.257 "listen_address": { 
00:18:54.257 "trtype": "TCP", 00:18:54.257 "adrfam": "IPv4", 00:18:54.257 "traddr": "10.0.0.2", 00:18:54.257 "trsvcid": "4420" 00:18:54.257 }, 00:18:54.257 "peer_address": { 00:18:54.257 "trtype": "TCP", 00:18:54.257 "adrfam": "IPv4", 00:18:54.257 "traddr": "10.0.0.1", 00:18:54.257 "trsvcid": "49472" 00:18:54.257 }, 00:18:54.257 "auth": { 00:18:54.257 "state": "completed", 00:18:54.257 "digest": "sha256", 00:18:54.257 "dhgroup": "ffdhe2048" 00:18:54.257 } 00:18:54.257 } 00:18:54.257 ]' 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.257 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.515 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.515 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.515 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.515 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.515 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.774 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.709 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.967 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:55.967 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.967 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.968 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.225 00:18:56.225 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.225 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.225 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.484 { 00:18:56.484 "cntlid": 13, 00:18:56.484 "qid": 0, 00:18:56.484 "state": "enabled", 00:18:56.484 "thread": "nvmf_tgt_poll_group_000", 00:18:56.484 "listen_address": { 00:18:56.484 "trtype": "TCP", 00:18:56.484 "adrfam": "IPv4", 00:18:56.484 "traddr": "10.0.0.2", 00:18:56.484 "trsvcid": "4420" 00:18:56.484 }, 00:18:56.484 "peer_address": { 00:18:56.484 "trtype": "TCP", 00:18:56.484 "adrfam": "IPv4", 00:18:56.484 "traddr": "10.0.0.1", 00:18:56.484 "trsvcid": "49492" 00:18:56.484 }, 00:18:56.484 "auth": { 00:18:56.484 
"state": "completed", 00:18:56.484 "digest": "sha256", 00:18:56.484 "dhgroup": "ffdhe2048" 00:18:56.484 } 00:18:56.484 } 00:18:56.484 ]' 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.484 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.744 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.744 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.744 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.004 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.937 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.196 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.454 00:18:58.454 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.454 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.454 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.713 { 00:18:58.713 "cntlid": 15, 00:18:58.713 "qid": 0, 00:18:58.713 "state": "enabled", 00:18:58.713 "thread": "nvmf_tgt_poll_group_000", 00:18:58.713 "listen_address": { 00:18:58.713 "trtype": "TCP", 00:18:58.713 "adrfam": "IPv4", 00:18:58.713 "traddr": "10.0.0.2", 00:18:58.713 "trsvcid": "4420" 00:18:58.713 }, 00:18:58.713 "peer_address": { 00:18:58.713 "trtype": "TCP", 00:18:58.713 "adrfam": "IPv4", 00:18:58.713 "traddr": "10.0.0.1", 00:18:58.713 "trsvcid": "49526" 00:18:58.713 }, 00:18:58.713 "auth": { 00:18:58.713 "state": "completed", 00:18:58.713 "digest": "sha256", 00:18:58.713 "dhgroup": "ffdhe2048" 00:18:58.713 } 00:18:58.713 } 00:18:58.713 ]' 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.713 06:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.713 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.713 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.713 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.713 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.971 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.905 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.162 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.163 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.745 00:19:00.745 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.745 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.745 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.044 { 00:19:01.044 "cntlid": 17, 00:19:01.044 "qid": 0, 00:19:01.044 "state": "enabled", 00:19:01.044 "thread": "nvmf_tgt_poll_group_000", 00:19:01.044 "listen_address": { 00:19:01.044 "trtype": "TCP", 00:19:01.044 "adrfam": "IPv4", 00:19:01.044 "traddr": "10.0.0.2", 00:19:01.044 "trsvcid": "4420" 00:19:01.044 }, 00:19:01.044 "peer_address": { 00:19:01.044 "trtype": "TCP", 00:19:01.044 "adrfam": "IPv4", 00:19:01.044 "traddr": "10.0.0.1", 00:19:01.044 "trsvcid": "49554" 00:19:01.044 }, 00:19:01.044 "auth": { 00:19:01.044 "state": "completed", 00:19:01.044 "digest": "sha256", 00:19:01.044 "dhgroup": "ffdhe3072" 00:19:01.044 } 00:19:01.044 } 00:19:01.044 ]' 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.044 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.044 06:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.045 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.045 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.045 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.303 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.240 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.497 06:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.497 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.498 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.498 06:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.756 00:19:02.756 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.756 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.756 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.014 { 00:19:03.014 "cntlid": 19, 00:19:03.014 "qid": 0, 00:19:03.014 "state": "enabled", 00:19:03.014 "thread": "nvmf_tgt_poll_group_000", 00:19:03.014 "listen_address": { 00:19:03.014 "trtype": "TCP", 00:19:03.014 "adrfam": "IPv4", 00:19:03.014 "traddr": "10.0.0.2", 00:19:03.014 "trsvcid": "4420" 00:19:03.014 }, 00:19:03.014 "peer_address": { 00:19:03.014 "trtype": "TCP", 00:19:03.014 "adrfam": "IPv4", 00:19:03.014 "traddr": "10.0.0.1", 00:19:03.014 "trsvcid": "49582" 00:19:03.014 }, 00:19:03.014 "auth": { 00:19:03.014 "state": "completed", 00:19:03.014 "digest": "sha256", 00:19:03.014 "dhgroup": "ffdhe3072" 00:19:03.014 } 00:19:03.014 } 00:19:03.014 ]' 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.014 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.272 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.272 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.272 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.272 06:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.272 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.530 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.470 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.729 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.987 00:19:04.987 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.987 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.987 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.245 { 00:19:05.245 "cntlid": 21, 00:19:05.245 "qid": 0, 00:19:05.245 "state": "enabled", 00:19:05.245 "thread": "nvmf_tgt_poll_group_000", 00:19:05.245 "listen_address": { 00:19:05.245 "trtype": "TCP", 00:19:05.245 "adrfam": "IPv4", 00:19:05.245 "traddr": "10.0.0.2", 00:19:05.245 "trsvcid": "4420" 00:19:05.245 }, 00:19:05.245 "peer_address": { 00:19:05.245 "trtype": "TCP", 00:19:05.245 "adrfam": "IPv4", 00:19:05.245 "traddr": "10.0.0.1", 00:19:05.245 "trsvcid": "53954" 00:19:05.245 }, 00:19:05.245 "auth": { 00:19:05.245 "state": "completed", 00:19:05.245 "digest": "sha256", 00:19:05.245 "dhgroup": "ffdhe3072" 00:19:05.245 } 00:19:05.245 } 00:19:05.245 ]' 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.245 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.503 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.503 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.503 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.503 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.503 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.760 
06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.696 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.955 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.214 00:19:07.214 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.472 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.472 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.730 { 00:19:07.730 "cntlid": 23, 00:19:07.730 "qid": 0, 00:19:07.730 "state": "enabled", 00:19:07.730 "thread": "nvmf_tgt_poll_group_000", 00:19:07.730 "listen_address": { 00:19:07.730 "trtype": "TCP", 00:19:07.730 "adrfam": "IPv4", 00:19:07.730 "traddr": "10.0.0.2", 00:19:07.730 "trsvcid": "4420" 00:19:07.730 }, 00:19:07.730 "peer_address": { 00:19:07.730 "trtype": "TCP", 00:19:07.730 "adrfam": "IPv4", 00:19:07.730 "traddr": "10.0.0.1", 00:19:07.730 "trsvcid": "53984" 00:19:07.730 }, 00:19:07.730 "auth": { 00:19:07.730 "state": "completed", 00:19:07.730 "digest": "sha256", 00:19:07.730 "dhgroup": "ffdhe3072" 00:19:07.730 } 00:19:07.730 } 00:19:07.730 ]' 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.730 06:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.988 06:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:19:08.923 06:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.923 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.181 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.748 00:19:09.748 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.748 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.748 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.748 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.748 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.748 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.748 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.748 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.748 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.748 { 00:19:09.748 "cntlid": 25, 00:19:09.748 "qid": 0, 00:19:09.748 "state": "enabled", 00:19:09.748 "thread": "nvmf_tgt_poll_group_000", 00:19:09.748 "listen_address": { 00:19:09.748 "trtype": "TCP", 00:19:09.748 "adrfam": "IPv4", 00:19:09.748 "traddr": "10.0.0.2", 00:19:09.748 "trsvcid": "4420" 00:19:09.748 }, 00:19:09.748 "peer_address": { 00:19:09.748 "trtype": "TCP", 00:19:09.748 "adrfam": "IPv4", 00:19:09.748 "traddr": "10.0.0.1", 00:19:09.748 "trsvcid": "54008" 00:19:09.748 }, 00:19:09.748 "auth": { 00:19:09.748 "state": "completed", 00:19:09.748 "digest": "sha256", 00:19:09.748 "dhgroup": "ffdhe4096" 00:19:09.748 } 00:19:09.748 } 00:19:09.748 ]' 00:19:09.748 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.008 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.008 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.008 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.008 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.008 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.008 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.008 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.276 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
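The entries above complete one full pass of this test's connect/verify/teardown loop; every later pass in the log repeats the same sequence with a different digest, DH group, or key index. Below is a condensed sketch of that loop, written as plain bash and reusing only commands and flags that appear verbatim in the trace. The key names (key0/ckey0) are whatever the test registered earlier in the run, and the target-side calls are assumed to go to the target's default RPC socket (the log's rpc_cmd wrapper hides that path), so treat both as placeholders rather than copy-paste values.

  # One iteration of the DH-CHAP loop exercised in this log (sketch, not the test script itself).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subsys=nqn.2024-03.io.spdk:cnode0

  # 1. Restrict the host-side NVMe driver to one digest / DH-group combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # 2. Allow the host on the target, bound to a DH-CHAP key (plus a controller
  #    key when bidirectional authentication is being tested).
  $rpc nvmf_subsystem_add_host "$subsys" "$host_nqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a controller from the host side, which forces the DH-CHAP exchange.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$host_nqn" -n "$subsys" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. Verify on the target that the new qpair authenticated with the expected
  #    parameters (the same jq checks the log shows at target/auth.sh@46-48).
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subsys")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # 5. Tear down before the next combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subsys" "$host_nqn"

One detail worth noticing in the passes that use key3: nvmf_subsystem_add_host and bdev_nvme_attach_controller are run without --dhchap-ctrlr-key/ckey3, so those iterations exercise unidirectional (host-only) authentication, while the key0/key1/key2 passes cover the bidirectional case.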
00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.212 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.471 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.729 00:19:11.729 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.729 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.729 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.988 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.988 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.988 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.988 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.988 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.988 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.988 { 00:19:11.988 "cntlid": 27, 00:19:11.988 "qid": 0, 00:19:11.988 "state": "enabled", 00:19:11.988 "thread": "nvmf_tgt_poll_group_000", 00:19:11.988 "listen_address": { 00:19:11.988 "trtype": "TCP", 00:19:11.988 "adrfam": "IPv4", 00:19:11.988 "traddr": "10.0.0.2", 00:19:11.988 "trsvcid": "4420" 00:19:11.988 }, 00:19:11.988 "peer_address": { 00:19:11.988 "trtype": "TCP", 00:19:11.988 "adrfam": "IPv4", 00:19:11.988 "traddr": "10.0.0.1", 00:19:11.988 "trsvcid": "54042" 00:19:11.988 }, 00:19:11.988 "auth": { 00:19:11.988 "state": "completed", 00:19:11.988 "digest": "sha256", 00:19:11.988 "dhgroup": "ffdhe4096" 00:19:11.988 } 00:19:11.988 } 00:19:11.988 ]' 00:19:11.988 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.246 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.246 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.246 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.246 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.246 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.246 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.246 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.504 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.445 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.711 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.278 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.278 { 00:19:14.278 "cntlid": 29, 00:19:14.278 "qid": 0, 00:19:14.278 "state": "enabled", 00:19:14.278 "thread": "nvmf_tgt_poll_group_000", 00:19:14.278 "listen_address": { 00:19:14.278 "trtype": "TCP", 00:19:14.278 "adrfam": "IPv4", 00:19:14.278 "traddr": "10.0.0.2", 00:19:14.278 "trsvcid": "4420" 00:19:14.278 }, 00:19:14.278 "peer_address": { 00:19:14.278 "trtype": "TCP", 00:19:14.278 "adrfam": "IPv4", 00:19:14.278 "traddr": "10.0.0.1", 00:19:14.278 "trsvcid": "42410" 00:19:14.278 }, 00:19:14.278 "auth": { 00:19:14.278 "state": "completed", 00:19:14.278 "digest": "sha256", 00:19:14.278 "dhgroup": "ffdhe4096" 00:19:14.278 } 00:19:14.278 } 00:19:14.278 ]' 00:19:14.278 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.537 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.537 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.537 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.537 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.537 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.537 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.537 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.796 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:19:15.730 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.730 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.730 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.730 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.730 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.730 06:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.730 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.730 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.988 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.246 00:19:16.506 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.506 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.506 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.772 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.773 { 00:19:16.773 "cntlid": 31, 00:19:16.773 "qid": 0, 00:19:16.773 "state": "enabled", 00:19:16.773 "thread": "nvmf_tgt_poll_group_000", 00:19:16.773 "listen_address": { 00:19:16.773 "trtype": "TCP", 00:19:16.773 "adrfam": "IPv4", 00:19:16.773 "traddr": "10.0.0.2", 00:19:16.773 "trsvcid": "4420" 00:19:16.773 }, 00:19:16.773 "peer_address": { 00:19:16.773 "trtype": "TCP", 00:19:16.773 "adrfam": "IPv4", 00:19:16.773 "traddr": "10.0.0.1", 00:19:16.773 "trsvcid": "42442" 00:19:16.773 }, 00:19:16.773 "auth": { 00:19:16.773 "state": "completed", 00:19:16.773 "digest": "sha256", 00:19:16.773 "dhgroup": "ffdhe4096" 00:19:16.773 } 00:19:16.773 } 00:19:16.773 ]' 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.773 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.032 06:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.968 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.227 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:18.227 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.227 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.227 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.227 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.227 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.228 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.228 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.228 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.228 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.228 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.228 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.794 00:19:18.794 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.794 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.794 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.051 { 00:19:19.051 "cntlid": 33, 00:19:19.051 "qid": 0, 00:19:19.051 "state": "enabled", 00:19:19.051 "thread": "nvmf_tgt_poll_group_000", 00:19:19.051 "listen_address": { 
00:19:19.051 "trtype": "TCP", 00:19:19.051 "adrfam": "IPv4", 00:19:19.051 "traddr": "10.0.0.2", 00:19:19.051 "trsvcid": "4420" 00:19:19.051 }, 00:19:19.051 "peer_address": { 00:19:19.051 "trtype": "TCP", 00:19:19.051 "adrfam": "IPv4", 00:19:19.051 "traddr": "10.0.0.1", 00:19:19.051 "trsvcid": "42458" 00:19:19.051 }, 00:19:19.051 "auth": { 00:19:19.051 "state": "completed", 00:19:19.051 "digest": "sha256", 00:19:19.051 "dhgroup": "ffdhe6144" 00:19:19.051 } 00:19:19.051 } 00:19:19.051 ]' 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.051 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.310 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:19:20.245 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.245 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.245 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.245 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.504 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.504 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.504 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.504 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:20.763 06:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.763 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.332 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.332 { 00:19:21.332 "cntlid": 35, 00:19:21.332 "qid": 0, 00:19:21.332 "state": "enabled", 00:19:21.332 "thread": "nvmf_tgt_poll_group_000", 00:19:21.332 "listen_address": { 00:19:21.332 "trtype": "TCP", 00:19:21.332 "adrfam": "IPv4", 00:19:21.332 "traddr": "10.0.0.2", 00:19:21.332 "trsvcid": "4420" 00:19:21.332 }, 00:19:21.332 "peer_address": { 00:19:21.332 "trtype": "TCP", 00:19:21.332 "adrfam": "IPv4", 00:19:21.332 "traddr": "10.0.0.1", 00:19:21.332 "trsvcid": "42494" 00:19:21.332 
}, 00:19:21.332 "auth": { 00:19:21.332 "state": "completed", 00:19:21.332 "digest": "sha256", 00:19:21.332 "dhgroup": "ffdhe6144" 00:19:21.332 } 00:19:21.332 } 00:19:21.332 ]' 00:19:21.332 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.590 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.590 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.590 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.590 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.590 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.590 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.590 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.848 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.785 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:23.043 06:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.043 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.608 00:19:23.608 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.609 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.609 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.866 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.866 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.866 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.866 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.866 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.866 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.866 { 00:19:23.866 "cntlid": 37, 00:19:23.866 "qid": 0, 00:19:23.866 "state": "enabled", 00:19:23.866 "thread": "nvmf_tgt_poll_group_000", 00:19:23.866 "listen_address": { 00:19:23.867 "trtype": "TCP", 00:19:23.867 "adrfam": "IPv4", 00:19:23.867 "traddr": "10.0.0.2", 00:19:23.867 "trsvcid": "4420" 00:19:23.867 }, 00:19:23.867 "peer_address": { 00:19:23.867 "trtype": "TCP", 00:19:23.867 "adrfam": "IPv4", 00:19:23.867 "traddr": "10.0.0.1", 00:19:23.867 "trsvcid": "54920" 00:19:23.867 }, 00:19:23.867 "auth": { 00:19:23.867 "state": "completed", 00:19:23.867 "digest": "sha256", 00:19:23.867 "dhgroup": "ffdhe6144" 00:19:23.867 } 00:19:23.867 } 00:19:23.867 ]' 00:19:23.867 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.867 06:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.867 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.867 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.867 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.125 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.125 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.125 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.382 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.315 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.574 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.140 00:19:26.140 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.140 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.141 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.399 { 00:19:26.399 "cntlid": 39, 00:19:26.399 "qid": 0, 00:19:26.399 "state": "enabled", 00:19:26.399 "thread": "nvmf_tgt_poll_group_000", 00:19:26.399 "listen_address": { 00:19:26.399 "trtype": "TCP", 00:19:26.399 "adrfam": "IPv4", 00:19:26.399 "traddr": "10.0.0.2", 00:19:26.399 "trsvcid": "4420" 00:19:26.399 }, 00:19:26.399 "peer_address": { 00:19:26.399 "trtype": "TCP", 00:19:26.399 "adrfam": "IPv4", 00:19:26.399 "traddr": "10.0.0.1", 00:19:26.399 "trsvcid": "54954" 00:19:26.399 }, 00:19:26.399 "auth": { 00:19:26.399 "state": "completed", 00:19:26.399 "digest": "sha256", 00:19:26.399 "dhgroup": "ffdhe6144" 00:19:26.399 } 00:19:26.399 } 00:19:26.399 ]' 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.399 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.657 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.592 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.851 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.785 00:19:28.785 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.785 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.785 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.044 { 00:19:29.044 "cntlid": 41, 00:19:29.044 "qid": 0, 00:19:29.044 "state": "enabled", 00:19:29.044 "thread": "nvmf_tgt_poll_group_000", 00:19:29.044 "listen_address": { 00:19:29.044 "trtype": "TCP", 00:19:29.044 "adrfam": "IPv4", 00:19:29.044 "traddr": "10.0.0.2", 00:19:29.044 "trsvcid": "4420" 00:19:29.044 }, 00:19:29.044 "peer_address": { 00:19:29.044 "trtype": "TCP", 00:19:29.044 "adrfam": "IPv4", 00:19:29.044 "traddr": "10.0.0.1", 00:19:29.044 "trsvcid": "54976" 00:19:29.044 }, 00:19:29.044 "auth": { 00:19:29.044 "state": "completed", 00:19:29.044 "digest": "sha256", 00:19:29.044 "dhgroup": "ffdhe8192" 00:19:29.044 } 00:19:29.044 } 00:19:29.044 ]' 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:29.044 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.302 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.675 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.606 00:19:31.606 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.606 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.606 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.863 { 00:19:31.863 "cntlid": 43, 00:19:31.863 "qid": 0, 00:19:31.863 "state": "enabled", 00:19:31.863 "thread": "nvmf_tgt_poll_group_000", 00:19:31.863 "listen_address": { 00:19:31.863 "trtype": "TCP", 00:19:31.863 "adrfam": "IPv4", 00:19:31.863 "traddr": "10.0.0.2", 00:19:31.863 "trsvcid": "4420" 00:19:31.863 }, 00:19:31.863 "peer_address": { 00:19:31.863 "trtype": "TCP", 00:19:31.863 "adrfam": "IPv4", 00:19:31.863 "traddr": "10.0.0.1", 00:19:31.863 "trsvcid": "55018" 00:19:31.863 }, 00:19:31.863 "auth": { 00:19:31.863 "state": "completed", 00:19:31.863 "digest": "sha256", 00:19:31.863 "dhgroup": "ffdhe8192" 00:19:31.863 } 00:19:31.863 } 00:19:31.863 ]' 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.863 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.864 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.121 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.055 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.313 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.248 00:19:34.248 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.248 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.248 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.506 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.506 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.506 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.506 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.506 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.506 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.506 { 00:19:34.506 "cntlid": 45, 00:19:34.506 "qid": 0, 00:19:34.506 "state": "enabled", 00:19:34.506 "thread": "nvmf_tgt_poll_group_000", 00:19:34.506 "listen_address": { 00:19:34.506 "trtype": "TCP", 00:19:34.506 "adrfam": "IPv4", 00:19:34.506 "traddr": "10.0.0.2", 00:19:34.506 "trsvcid": "4420" 00:19:34.506 }, 00:19:34.506 "peer_address": { 00:19:34.506 "trtype": "TCP", 00:19:34.506 "adrfam": "IPv4", 00:19:34.506 "traddr": "10.0.0.1", 00:19:34.506 "trsvcid": "33666" 00:19:34.506 }, 00:19:34.506 "auth": { 00:19:34.506 "state": "completed", 00:19:34.506 "digest": "sha256", 00:19:34.506 "dhgroup": "ffdhe8192" 00:19:34.506 } 00:19:34.507 } 00:19:34.507 ]' 00:19:34.507 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.507 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.507 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.765 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.765 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.765 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.765 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.765 06:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.022 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret 
DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.957 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.215 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.149 00:19:37.149 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.149 06:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.149 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.407 { 00:19:37.407 "cntlid": 47, 00:19:37.407 "qid": 0, 00:19:37.407 "state": "enabled", 00:19:37.407 "thread": "nvmf_tgt_poll_group_000", 00:19:37.407 "listen_address": { 00:19:37.407 "trtype": "TCP", 00:19:37.407 "adrfam": "IPv4", 00:19:37.407 "traddr": "10.0.0.2", 00:19:37.407 "trsvcid": "4420" 00:19:37.407 }, 00:19:37.407 "peer_address": { 00:19:37.407 "trtype": "TCP", 00:19:37.407 "adrfam": "IPv4", 00:19:37.407 "traddr": "10.0.0.1", 00:19:37.407 "trsvcid": "33684" 00:19:37.407 }, 00:19:37.407 "auth": { 00:19:37.407 "state": "completed", 00:19:37.407 "digest": "sha256", 00:19:37.407 "dhgroup": "ffdhe8192" 00:19:37.407 } 00:19:37.407 } 00:19:37.407 ]' 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.407 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.665 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.665 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.665 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.665 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.665 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.921 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.854 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.113 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.375 00:19:39.375 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.375 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.375 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.633 { 00:19:39.633 "cntlid": 49, 00:19:39.633 "qid": 0, 00:19:39.633 "state": "enabled", 00:19:39.633 "thread": "nvmf_tgt_poll_group_000", 00:19:39.633 "listen_address": { 00:19:39.633 "trtype": "TCP", 00:19:39.633 "adrfam": "IPv4", 00:19:39.633 "traddr": "10.0.0.2", 00:19:39.633 "trsvcid": "4420" 00:19:39.633 }, 00:19:39.633 "peer_address": { 00:19:39.633 "trtype": "TCP", 00:19:39.633 "adrfam": "IPv4", 00:19:39.633 "traddr": "10.0.0.1", 00:19:39.633 "trsvcid": "33714" 00:19:39.633 }, 00:19:39.633 "auth": { 00:19:39.633 "state": "completed", 00:19:39.633 "digest": "sha384", 00:19:39.633 "dhgroup": "null" 00:19:39.633 } 00:19:39.633 } 00:19:39.633 ]' 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.633 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.891 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:19:40.821 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.821 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.821 06:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.079 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.079 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.079 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.079 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.079 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.336 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.592 00:19:41.592 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.592 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.592 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.848 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.849 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.849 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.849 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.849 { 00:19:41.849 "cntlid": 51, 00:19:41.849 "qid": 0, 00:19:41.849 "state": "enabled", 00:19:41.849 "thread": "nvmf_tgt_poll_group_000", 00:19:41.849 "listen_address": { 00:19:41.849 "trtype": "TCP", 00:19:41.849 "adrfam": "IPv4", 00:19:41.849 "traddr": "10.0.0.2", 00:19:41.849 "trsvcid": "4420" 00:19:41.849 }, 00:19:41.849 "peer_address": { 00:19:41.849 "trtype": "TCP", 00:19:41.849 "adrfam": "IPv4", 00:19:41.849 "traddr": "10.0.0.1", 00:19:41.849 "trsvcid": "33756" 00:19:41.849 }, 00:19:41.849 "auth": { 00:19:41.849 "state": "completed", 00:19:41.849 "digest": "sha384", 00:19:41.849 "dhgroup": "null" 00:19:41.849 } 00:19:41.849 } 00:19:41.849 ]' 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.849 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.105 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:43.037 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:43.294 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.295 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.860 00:19:43.860 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.860 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.860 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.860 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.860 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.860 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.860 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.860 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:19:43.860 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.860 { 00:19:43.860 "cntlid": 53, 00:19:43.860 "qid": 0, 00:19:43.860 "state": "enabled", 00:19:43.860 "thread": "nvmf_tgt_poll_group_000", 00:19:43.860 "listen_address": { 00:19:43.860 "trtype": "TCP", 00:19:43.860 "adrfam": "IPv4", 00:19:43.860 "traddr": "10.0.0.2", 00:19:43.860 "trsvcid": "4420" 00:19:43.860 }, 00:19:43.860 "peer_address": { 00:19:43.860 "trtype": "TCP", 00:19:43.860 "adrfam": "IPv4", 00:19:43.860 "traddr": "10.0.0.1", 00:19:43.860 "trsvcid": "35004" 00:19:43.860 }, 00:19:43.860 "auth": { 00:19:43.860 "state": "completed", 00:19:43.860 "digest": "sha384", 00:19:43.860 "dhgroup": "null" 00:19:43.860 } 00:19:43.860 } 00:19:43.860 ]' 00:19:43.860 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.118 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.118 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.118 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:44.118 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.118 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.118 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.118 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.376 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.308 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.566 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.824 00:19:45.824 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.824 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.824 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.081 { 00:19:46.081 "cntlid": 55, 00:19:46.081 "qid": 0, 00:19:46.081 "state": "enabled", 00:19:46.081 "thread": "nvmf_tgt_poll_group_000", 00:19:46.081 "listen_address": { 00:19:46.081 "trtype": "TCP", 00:19:46.081 "adrfam": "IPv4", 00:19:46.081 "traddr": "10.0.0.2", 00:19:46.081 "trsvcid": "4420" 00:19:46.081 }, 00:19:46.081 "peer_address": { 
00:19:46.081 "trtype": "TCP", 00:19:46.081 "adrfam": "IPv4", 00:19:46.081 "traddr": "10.0.0.1", 00:19:46.081 "trsvcid": "35012" 00:19:46.081 }, 00:19:46.081 "auth": { 00:19:46.081 "state": "completed", 00:19:46.081 "digest": "sha384", 00:19:46.081 "dhgroup": "null" 00:19:46.081 } 00:19:46.081 } 00:19:46.081 ]' 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.081 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.339 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.710 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.967 00:19:47.967 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.967 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.967 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.224 { 00:19:48.224 "cntlid": 57, 00:19:48.224 "qid": 0, 00:19:48.224 "state": "enabled", 00:19:48.224 "thread": "nvmf_tgt_poll_group_000", 00:19:48.224 "listen_address": { 00:19:48.224 "trtype": "TCP", 00:19:48.224 "adrfam": "IPv4", 00:19:48.224 "traddr": "10.0.0.2", 00:19:48.224 "trsvcid": "4420" 00:19:48.224 }, 00:19:48.224 "peer_address": { 00:19:48.224 "trtype": "TCP", 00:19:48.224 "adrfam": "IPv4", 00:19:48.224 "traddr": "10.0.0.1", 00:19:48.224 "trsvcid": "35040" 00:19:48.224 }, 00:19:48.224 "auth": { 00:19:48.224 "state": "completed", 00:19:48.224 "digest": "sha384", 00:19:48.224 "dhgroup": "ffdhe2048" 00:19:48.224 } 00:19:48.224 } 00:19:48.224 ]' 
00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.224 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.482 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.482 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.482 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.740 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.673 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.931 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.189 00:19:50.189 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.189 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.189 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.447 { 00:19:50.447 "cntlid": 59, 00:19:50.447 "qid": 0, 00:19:50.447 "state": "enabled", 00:19:50.447 "thread": "nvmf_tgt_poll_group_000", 00:19:50.447 "listen_address": { 00:19:50.447 "trtype": "TCP", 00:19:50.447 "adrfam": "IPv4", 00:19:50.447 "traddr": "10.0.0.2", 00:19:50.447 "trsvcid": "4420" 00:19:50.447 }, 00:19:50.447 "peer_address": { 00:19:50.447 "trtype": "TCP", 00:19:50.447 "adrfam": "IPv4", 00:19:50.447 "traddr": "10.0.0.1", 00:19:50.447 "trsvcid": "35084" 00:19:50.447 }, 00:19:50.447 "auth": { 00:19:50.447 "state": "completed", 00:19:50.447 "digest": "sha384", 00:19:50.447 "dhgroup": "ffdhe2048" 00:19:50.447 } 00:19:50.447 } 00:19:50.447 ]' 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.447 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.705 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.640 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.910 
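Each round in the trace repeats the same per-key sequence: restrict the SPDK host app to the digest/dhgroup under test, register the host NQN on the subsystem with a DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is wanted), then attach a controller through the host RPC socket with the same key names. A minimal sketch of one such round, assuming rpc.py from this checkout, the target app on its default /var/tmp/spdk.sock (the trace does not show the target socket), and key names key1/ckey1 that were registered earlier in the script:

# Placeholders; adjust NQNs, addresses, key names and sockets to the setup under test.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock          # SPDK host (initiator) application
TGT_SOCK=/var/tmp/spdk.sock           # target application; assumed default socket
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Limit the host to the digest/dhgroup being exercised in this round.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Allow the host on the subsystem; ckey1 makes the controller authenticate back as well.
"$RPC" -s "$TGT_SOCK" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host side using the same key names.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1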
06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.910 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.191 00:19:52.191 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.191 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.191 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.449 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.707 { 00:19:52.707 "cntlid": 61, 00:19:52.707 "qid": 0, 00:19:52.707 "state": "enabled", 00:19:52.707 "thread": "nvmf_tgt_poll_group_000", 00:19:52.707 "listen_address": { 00:19:52.707 "trtype": "TCP", 00:19:52.707 "adrfam": "IPv4", 00:19:52.707 "traddr": "10.0.0.2", 00:19:52.707 "trsvcid": "4420" 00:19:52.707 }, 00:19:52.707 "peer_address": { 00:19:52.707 "trtype": "TCP", 00:19:52.707 "adrfam": "IPv4", 00:19:52.707 "traddr": "10.0.0.1", 00:19:52.707 "trsvcid": "35118" 00:19:52.707 }, 00:19:52.707 "auth": { 00:19:52.707 "state": "completed", 00:19:52.707 "digest": "sha384", 00:19:52.707 "dhgroup": "ffdhe2048" 00:19:52.707 } 00:19:52.707 } 00:19:52.707 ]' 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.707 06:15:45 
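After each attach the script verifies the result on both ends: the host should report exactly one controller named nvme0, and the subsystem's qpair listing (the JSON blocks in the trace) should show the negotiated digest, DH group, and a completed authentication state. A sketch of those checks, reusing the placeholders from the previous sketch:

# Host side: the attached controller is expected to be the only one and named nvme0.
name=$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: inspect the qpair's negotiated authentication parameters.
qpairs=$("$RPC" -s "$TGT_SOCK" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]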
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.707 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.964 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.897 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.155 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:54.155 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.155 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.155 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:54.155 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.155 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.156 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:54.156 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.156 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.156 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.156 
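The round that starts above differs from the earlier ones: for key index 3 the ckeys entry is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion contributes nothing and both nvmf_subsystem_add_host and the subsequent attach are issued with --dhchap-key key3 only, i.e. the host proves its identity but does not require the controller to authenticate back. A small illustration of that expansion (keyid and the ckeys array stand in for the script's own variables):

# Pass a controller key only when one was generated for this key index.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
"$RPC" -s "$TGT_SOCK" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"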
06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.156 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.413 00:19:54.413 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.413 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.414 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.672 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.672 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.672 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.672 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.672 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.672 { 00:19:54.672 "cntlid": 63, 00:19:54.672 "qid": 0, 00:19:54.672 "state": "enabled", 00:19:54.672 "thread": "nvmf_tgt_poll_group_000", 00:19:54.672 "listen_address": { 00:19:54.672 "trtype": "TCP", 00:19:54.672 "adrfam": "IPv4", 00:19:54.672 "traddr": "10.0.0.2", 00:19:54.672 "trsvcid": "4420" 00:19:54.672 }, 00:19:54.672 "peer_address": { 00:19:54.672 "trtype": "TCP", 00:19:54.672 "adrfam": "IPv4", 00:19:54.672 "traddr": "10.0.0.1", 00:19:54.672 "trsvcid": "48630" 00:19:54.672 }, 00:19:54.672 "auth": { 00:19:54.672 "state": "completed", 00:19:54.672 "digest": "sha384", 00:19:54.672 "dhgroup": "ffdhe2048" 00:19:54.672 } 00:19:54.672 } 00:19:54.672 ]' 00:19:54.672 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.672 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.672 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.930 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.930 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.930 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.930 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.930 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:55.187 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.121 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.379 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.379 06:15:49 
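Each round also exercises the kernel initiator: nvme connect is handed the per-round secrets directly as DHHC-1 strings; rounds that use a controller key additionally pass --dhchap-ctrl-secret (the key3 round just above omits it), and the session is dropped again with nvme disconnect. A sketch of that step; the secret values here are placeholders rather than the ones from this run:

HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
# Placeholder secrets; in the trace these are real DHHC-1 strings prepared earlier in the script.
DHCHAP_KEY='DHHC-1:00:<base64 host secret>:'
DHCHAP_CTRL_KEY='DHHC-1:03:<base64 controller secret>:'

nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

nvme disconnect -n "$SUBNQN"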
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.637 00:19:56.637 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.637 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.637 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.895 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.895 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.895 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.895 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.154 { 00:19:57.154 "cntlid": 65, 00:19:57.154 "qid": 0, 00:19:57.154 "state": "enabled", 00:19:57.154 "thread": "nvmf_tgt_poll_group_000", 00:19:57.154 "listen_address": { 00:19:57.154 "trtype": "TCP", 00:19:57.154 "adrfam": "IPv4", 00:19:57.154 "traddr": "10.0.0.2", 00:19:57.154 "trsvcid": "4420" 00:19:57.154 }, 00:19:57.154 "peer_address": { 00:19:57.154 "trtype": "TCP", 00:19:57.154 "adrfam": "IPv4", 00:19:57.154 "traddr": "10.0.0.1", 00:19:57.154 "trsvcid": "48650" 00:19:57.154 }, 00:19:57.154 "auth": { 00:19:57.154 "state": "completed", 00:19:57.154 "digest": "sha384", 00:19:57.154 "dhgroup": "ffdhe3072" 00:19:57.154 } 00:19:57.154 } 00:19:57.154 ]' 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.154 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.412 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:58.345 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.603 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.861 00:19:58.861 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.861 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.861 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.118 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.118 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.118 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.118 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.118 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.118 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.118 { 00:19:59.118 "cntlid": 67, 00:19:59.118 "qid": 0, 00:19:59.118 "state": "enabled", 00:19:59.118 "thread": "nvmf_tgt_poll_group_000", 00:19:59.118 "listen_address": { 00:19:59.118 "trtype": "TCP", 00:19:59.118 "adrfam": "IPv4", 00:19:59.118 "traddr": "10.0.0.2", 00:19:59.118 "trsvcid": "4420" 00:19:59.118 }, 00:19:59.119 "peer_address": { 00:19:59.119 "trtype": "TCP", 00:19:59.119 "adrfam": "IPv4", 00:19:59.119 "traddr": "10.0.0.1", 00:19:59.119 "trsvcid": "48688" 00:19:59.119 }, 00:19:59.119 "auth": { 00:19:59.119 "state": "completed", 00:19:59.119 "digest": "sha384", 00:19:59.119 "dhgroup": "ffdhe3072" 00:19:59.119 } 00:19:59.119 } 00:19:59.119 ]' 00:19:59.119 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.377 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.377 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.377 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.377 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.377 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.377 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.377 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.634 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:20:00.564 06:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.564 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.564 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.564 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.564 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.564 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.564 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:00.564 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.822 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.080 00:20:01.080 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.080 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.080 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.338 { 00:20:01.338 "cntlid": 69, 00:20:01.338 "qid": 0, 00:20:01.338 "state": "enabled", 00:20:01.338 "thread": "nvmf_tgt_poll_group_000", 00:20:01.338 "listen_address": { 00:20:01.338 "trtype": "TCP", 00:20:01.338 "adrfam": "IPv4", 00:20:01.338 "traddr": "10.0.0.2", 00:20:01.338 "trsvcid": "4420" 00:20:01.338 }, 00:20:01.338 "peer_address": { 00:20:01.338 "trtype": "TCP", 00:20:01.338 "adrfam": "IPv4", 00:20:01.338 "traddr": "10.0.0.1", 00:20:01.338 "trsvcid": "48726" 00:20:01.338 }, 00:20:01.338 "auth": { 00:20:01.338 "state": "completed", 00:20:01.338 "digest": "sha384", 00:20:01.338 "dhgroup": "ffdhe3072" 00:20:01.338 } 00:20:01.338 } 00:20:01.338 ]' 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.338 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.596 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.596 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.596 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.596 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.596 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.853 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.787 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.045 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.303 00:20:03.303 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.303 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.303 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.562 06:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.562 { 00:20:03.562 "cntlid": 71, 00:20:03.562 "qid": 0, 00:20:03.562 "state": "enabled", 00:20:03.562 "thread": "nvmf_tgt_poll_group_000", 00:20:03.562 "listen_address": { 00:20:03.562 "trtype": "TCP", 00:20:03.562 "adrfam": "IPv4", 00:20:03.562 "traddr": "10.0.0.2", 00:20:03.562 "trsvcid": "4420" 00:20:03.562 }, 00:20:03.562 "peer_address": { 00:20:03.562 "trtype": "TCP", 00:20:03.562 "adrfam": "IPv4", 00:20:03.562 "traddr": "10.0.0.1", 00:20:03.562 "trsvcid": "48756" 00:20:03.562 }, 00:20:03.562 "auth": { 00:20:03.562 "state": "completed", 00:20:03.562 "digest": "sha384", 00:20:03.562 "dhgroup": "ffdhe3072" 00:20:03.562 } 00:20:03.562 } 00:20:03.562 ]' 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.562 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.820 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.820 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.820 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.080 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.017 06:15:58 
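Every round ends with the same teardown before the next key index is tried: the SPDK host drops its controller and the host entry is removed from the subsystem, so the following nvmf_subsystem_add_host starts from a clean slate. With the placeholders used above:

# Host side: drop the authenticated controller.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

# Target side: deregister the host so the next round can re-add it with different keys.
"$RPC" -s "$TGT_SOCK" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"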
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.017 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.276 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.534 00:20:05.534 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.534 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.535 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.792 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.793 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.793 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.793 06:15:59 
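At this point the outer loop has moved on from ffdhe3072 to ffdhe4096. Taken together, the portion of target/auth.sh traced here is a nested sweep over DH groups and key indices, where each iteration performs the register/attach/verify/connect/teardown steps sketched above; connect_authenticate is the script's helper for one round. Roughly (variable names follow the trace, not the script verbatim):

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen in this part of the trace; the script may cover more
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Restrict the host to this digest/dhgroup pair, then run one full round for this key.
        "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done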
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.793 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.793 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.793 { 00:20:05.793 "cntlid": 73, 00:20:05.793 "qid": 0, 00:20:05.793 "state": "enabled", 00:20:05.793 "thread": "nvmf_tgt_poll_group_000", 00:20:05.793 "listen_address": { 00:20:05.793 "trtype": "TCP", 00:20:05.793 "adrfam": "IPv4", 00:20:05.793 "traddr": "10.0.0.2", 00:20:05.793 "trsvcid": "4420" 00:20:05.793 }, 00:20:05.793 "peer_address": { 00:20:05.793 "trtype": "TCP", 00:20:05.793 "adrfam": "IPv4", 00:20:05.793 "traddr": "10.0.0.1", 00:20:05.793 "trsvcid": "49062" 00:20:05.793 }, 00:20:05.793 "auth": { 00:20:05.793 "state": "completed", 00:20:05.793 "digest": "sha384", 00:20:05.793 "dhgroup": "ffdhe4096" 00:20:05.793 } 00:20:05.793 } 00:20:05.793 ]' 00:20:05.793 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.793 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.793 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.052 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.052 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.052 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.052 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.052 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.311 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.244 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.502 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.760 00:20:07.760 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.760 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.760 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:20:08.018 { 00:20:08.018 "cntlid": 75, 00:20:08.018 "qid": 0, 00:20:08.018 "state": "enabled", 00:20:08.018 "thread": "nvmf_tgt_poll_group_000", 00:20:08.018 "listen_address": { 00:20:08.018 "trtype": "TCP", 00:20:08.018 "adrfam": "IPv4", 00:20:08.018 "traddr": "10.0.0.2", 00:20:08.018 "trsvcid": "4420" 00:20:08.018 }, 00:20:08.018 "peer_address": { 00:20:08.018 "trtype": "TCP", 00:20:08.018 "adrfam": "IPv4", 00:20:08.018 "traddr": "10.0.0.1", 00:20:08.018 "trsvcid": "49098" 00:20:08.018 }, 00:20:08.018 "auth": { 00:20:08.018 "state": "completed", 00:20:08.018 "digest": "sha384", 00:20:08.018 "dhgroup": "ffdhe4096" 00:20:08.018 } 00:20:08.018 } 00:20:08.018 ]' 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.018 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.275 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.275 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.275 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.275 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.275 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.532 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:20:09.463 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.464 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.464 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.464 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.464 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.464 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.464 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.464 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.721 
06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.721 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.978 00:20:09.978 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.978 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.979 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.236 { 00:20:10.236 "cntlid": 77, 00:20:10.236 "qid": 0, 00:20:10.236 "state": "enabled", 00:20:10.236 "thread": "nvmf_tgt_poll_group_000", 00:20:10.236 "listen_address": { 00:20:10.236 "trtype": "TCP", 00:20:10.236 "adrfam": "IPv4", 00:20:10.236 "traddr": "10.0.0.2", 00:20:10.236 "trsvcid": "4420" 00:20:10.236 }, 00:20:10.236 "peer_address": { 
00:20:10.236 "trtype": "TCP", 00:20:10.236 "adrfam": "IPv4", 00:20:10.236 "traddr": "10.0.0.1", 00:20:10.236 "trsvcid": "49136" 00:20:10.236 }, 00:20:10.236 "auth": { 00:20:10.236 "state": "completed", 00:20:10.236 "digest": "sha384", 00:20:10.236 "dhgroup": "ffdhe4096" 00:20:10.236 } 00:20:10.236 } 00:20:10.236 ]' 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.236 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.493 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.493 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.493 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.493 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.493 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.750 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.683 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.941 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.199 00:20:12.457 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.457 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.457 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.457 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.715 { 00:20:12.715 "cntlid": 79, 00:20:12.715 "qid": 0, 00:20:12.715 "state": "enabled", 00:20:12.715 "thread": "nvmf_tgt_poll_group_000", 00:20:12.715 "listen_address": { 00:20:12.715 "trtype": "TCP", 00:20:12.715 "adrfam": "IPv4", 00:20:12.715 "traddr": "10.0.0.2", 00:20:12.715 "trsvcid": "4420" 00:20:12.715 }, 00:20:12.715 "peer_address": { 00:20:12.715 "trtype": "TCP", 00:20:12.715 "adrfam": "IPv4", 00:20:12.715 "traddr": "10.0.0.1", 00:20:12.715 "trsvcid": "49164" 00:20:12.715 }, 00:20:12.715 "auth": { 00:20:12.715 "state": "completed", 00:20:12.715 "digest": "sha384", 00:20:12.715 "dhgroup": "ffdhe4096" 00:20:12.715 } 00:20:12.715 } 00:20:12.715 ]' 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.715 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.973 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.907 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.166 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.731 00:20:14.731 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.731 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.731 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.989 { 00:20:14.989 "cntlid": 81, 00:20:14.989 "qid": 0, 00:20:14.989 "state": "enabled", 00:20:14.989 "thread": "nvmf_tgt_poll_group_000", 00:20:14.989 "listen_address": { 00:20:14.989 "trtype": "TCP", 00:20:14.989 "adrfam": "IPv4", 00:20:14.989 "traddr": "10.0.0.2", 00:20:14.989 "trsvcid": "4420" 00:20:14.989 }, 00:20:14.989 "peer_address": { 00:20:14.989 "trtype": "TCP", 00:20:14.989 "adrfam": "IPv4", 00:20:14.989 "traddr": "10.0.0.1", 00:20:14.989 "trsvcid": "51616" 00:20:14.989 }, 00:20:14.989 "auth": { 00:20:14.989 "state": "completed", 00:20:14.989 "digest": "sha384", 00:20:14.989 "dhgroup": "ffdhe6144" 00:20:14.989 } 00:20:14.989 } 00:20:14.989 ]' 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.989 06:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.989 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.248 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.248 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.248 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.506 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.439 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.722 06:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.722 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.723 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.723 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.306 00:20:17.306 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.306 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.306 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.565 { 00:20:17.565 "cntlid": 83, 00:20:17.565 "qid": 0, 00:20:17.565 "state": "enabled", 00:20:17.565 "thread": "nvmf_tgt_poll_group_000", 00:20:17.565 "listen_address": { 00:20:17.565 "trtype": "TCP", 00:20:17.565 "adrfam": "IPv4", 00:20:17.565 "traddr": "10.0.0.2", 00:20:17.565 "trsvcid": "4420" 00:20:17.565 }, 00:20:17.565 "peer_address": { 00:20:17.565 "trtype": "TCP", 00:20:17.565 "adrfam": "IPv4", 00:20:17.565 "traddr": "10.0.0.1", 00:20:17.565 "trsvcid": "51636" 00:20:17.565 }, 00:20:17.565 "auth": { 00:20:17.565 "state": "completed", 00:20:17.565 "digest": "sha384", 00:20:17.565 "dhgroup": "ffdhe6144" 00:20:17.565 } 00:20:17.565 } 00:20:17.565 ]' 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.565 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.823 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.198 06:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.198 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.764 00:20:19.764 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.764 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.764 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.022 { 00:20:20.022 "cntlid": 85, 00:20:20.022 "qid": 0, 00:20:20.022 "state": "enabled", 00:20:20.022 "thread": "nvmf_tgt_poll_group_000", 00:20:20.022 "listen_address": { 00:20:20.022 "trtype": "TCP", 00:20:20.022 "adrfam": "IPv4", 00:20:20.022 "traddr": "10.0.0.2", 00:20:20.022 "trsvcid": "4420" 00:20:20.022 }, 00:20:20.022 "peer_address": { 00:20:20.022 "trtype": "TCP", 00:20:20.022 "adrfam": "IPv4", 00:20:20.022 "traddr": "10.0.0.1", 00:20:20.022 "trsvcid": "51668" 00:20:20.022 }, 00:20:20.022 "auth": { 00:20:20.022 "state": "completed", 00:20:20.022 "digest": "sha384", 00:20:20.022 "dhgroup": "ffdhe6144" 00:20:20.022 } 00:20:20.022 } 00:20:20.022 ]' 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.022 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.280 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.653 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.653 06:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.220 00:20:22.220 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.220 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.220 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.478 { 00:20:22.478 "cntlid": 87, 00:20:22.478 "qid": 0, 00:20:22.478 "state": "enabled", 00:20:22.478 "thread": "nvmf_tgt_poll_group_000", 00:20:22.478 "listen_address": { 00:20:22.478 "trtype": "TCP", 00:20:22.478 "adrfam": "IPv4", 00:20:22.478 "traddr": "10.0.0.2", 00:20:22.478 "trsvcid": "4420" 00:20:22.478 }, 00:20:22.478 "peer_address": { 00:20:22.478 "trtype": "TCP", 00:20:22.478 "adrfam": "IPv4", 00:20:22.478 "traddr": "10.0.0.1", 00:20:22.478 "trsvcid": "51696" 00:20:22.478 }, 00:20:22.478 "auth": { 00:20:22.478 "state": "completed", 00:20:22.478 "digest": "sha384", 00:20:22.478 "dhgroup": "ffdhe6144" 00:20:22.478 } 00:20:22.478 } 00:20:22.478 ]' 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.478 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.736 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.736 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.736 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.994 06:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.929 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.187 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.121 00:20:25.121 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.121 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.121 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.379 { 00:20:25.379 "cntlid": 89, 00:20:25.379 "qid": 0, 00:20:25.379 "state": "enabled", 00:20:25.379 "thread": "nvmf_tgt_poll_group_000", 00:20:25.379 "listen_address": { 00:20:25.379 "trtype": "TCP", 00:20:25.379 "adrfam": "IPv4", 00:20:25.379 "traddr": "10.0.0.2", 00:20:25.379 "trsvcid": "4420" 00:20:25.379 }, 00:20:25.379 "peer_address": { 00:20:25.379 "trtype": "TCP", 00:20:25.379 "adrfam": "IPv4", 00:20:25.379 "traddr": "10.0.0.1", 00:20:25.379 "trsvcid": "43526" 00:20:25.379 }, 00:20:25.379 "auth": { 00:20:25.379 "state": "completed", 00:20:25.379 "digest": "sha384", 00:20:25.379 "dhgroup": "ffdhe8192" 00:20:25.379 } 00:20:25.379 } 00:20:25.379 ]' 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.379 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.637 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.637 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.637 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.895 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:20:26.827 06:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.827 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.827 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.827 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.827 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.827 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.827 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.827 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.085 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.019 00:20:28.019 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.019 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.019 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.276 { 00:20:28.276 "cntlid": 91, 00:20:28.276 "qid": 0, 00:20:28.276 "state": "enabled", 00:20:28.276 "thread": "nvmf_tgt_poll_group_000", 00:20:28.276 "listen_address": { 00:20:28.276 "trtype": "TCP", 00:20:28.276 "adrfam": "IPv4", 00:20:28.276 "traddr": "10.0.0.2", 00:20:28.276 "trsvcid": "4420" 00:20:28.276 }, 00:20:28.276 "peer_address": { 00:20:28.276 "trtype": "TCP", 00:20:28.276 "adrfam": "IPv4", 00:20:28.276 "traddr": "10.0.0.1", 00:20:28.276 "trsvcid": "43550" 00:20:28.276 }, 00:20:28.276 "auth": { 00:20:28.276 "state": "completed", 00:20:28.276 "digest": "sha384", 00:20:28.276 "dhgroup": "ffdhe8192" 00:20:28.276 } 00:20:28.276 } 00:20:28.276 ]' 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.276 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.534 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.906 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.906 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.848 00:20:30.848 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.848 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.849 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.106 { 00:20:31.106 "cntlid": 93, 00:20:31.106 "qid": 0, 00:20:31.106 "state": "enabled", 00:20:31.106 "thread": "nvmf_tgt_poll_group_000", 00:20:31.106 "listen_address": { 00:20:31.106 "trtype": "TCP", 00:20:31.106 "adrfam": "IPv4", 00:20:31.106 "traddr": "10.0.0.2", 00:20:31.106 "trsvcid": "4420" 00:20:31.106 }, 00:20:31.106 "peer_address": { 00:20:31.106 "trtype": "TCP", 00:20:31.106 "adrfam": "IPv4", 00:20:31.106 "traddr": "10.0.0.1", 00:20:31.106 "trsvcid": "43570" 00:20:31.106 }, 00:20:31.106 "auth": { 00:20:31.106 "state": "completed", 00:20:31.106 "digest": "sha384", 00:20:31.106 "dhgroup": "ffdhe8192" 00:20:31.106 } 00:20:31.106 } 00:20:31.106 ]' 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.106 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.364 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:20:32.295 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.295 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.295 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.295 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.295 06:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.295 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.296 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.296 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.553 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.490 00:20:33.490 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.490 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.490 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.747 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.747 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.747 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.748 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:33.748 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.748 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.748 { 00:20:33.748 "cntlid": 95, 00:20:33.748 "qid": 0, 00:20:33.748 "state": "enabled", 00:20:33.748 "thread": "nvmf_tgt_poll_group_000", 00:20:33.748 "listen_address": { 00:20:33.748 "trtype": "TCP", 00:20:33.748 "adrfam": "IPv4", 00:20:33.748 "traddr": "10.0.0.2", 00:20:33.748 "trsvcid": "4420" 00:20:33.748 }, 00:20:33.748 "peer_address": { 00:20:33.748 "trtype": "TCP", 00:20:33.748 "adrfam": "IPv4", 00:20:33.748 "traddr": "10.0.0.1", 00:20:33.748 "trsvcid": "43582" 00:20:33.748 }, 00:20:33.748 "auth": { 00:20:33.748 "state": "completed", 00:20:33.748 "digest": "sha384", 00:20:33.748 "dhgroup": "ffdhe8192" 00:20:33.748 } 00:20:33.748 } 00:20:33.748 ]' 00:20:33.748 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.748 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.748 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.748 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.748 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.748 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.748 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.748 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.006 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.938 06:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:34.938 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.196 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.197 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.197 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.197 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.762 00:20:35.762 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.762 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.762 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.020 06:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.020 { 00:20:36.020 "cntlid": 97, 00:20:36.020 "qid": 0, 00:20:36.020 "state": "enabled", 00:20:36.020 "thread": "nvmf_tgt_poll_group_000", 00:20:36.020 "listen_address": { 00:20:36.020 "trtype": "TCP", 00:20:36.020 "adrfam": "IPv4", 00:20:36.020 "traddr": "10.0.0.2", 00:20:36.020 "trsvcid": "4420" 00:20:36.020 }, 00:20:36.020 "peer_address": { 00:20:36.020 "trtype": "TCP", 00:20:36.020 "adrfam": "IPv4", 00:20:36.020 "traddr": "10.0.0.1", 00:20:36.020 "trsvcid": "35154" 00:20:36.020 }, 00:20:36.020 "auth": { 00:20:36.020 "state": "completed", 00:20:36.020 "digest": "sha512", 00:20:36.020 "dhgroup": "null" 00:20:36.020 } 00:20:36.020 } 00:20:36.020 ]' 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.020 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.278 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:20:37.211 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.211 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.211 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.211 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.212 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.212 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.212 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.212 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.470 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.728 00:20:37.986 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.986 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.986 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.244 { 00:20:38.244 "cntlid": 99, 00:20:38.244 "qid": 0, 00:20:38.244 "state": "enabled", 00:20:38.244 "thread": "nvmf_tgt_poll_group_000", 00:20:38.244 "listen_address": { 00:20:38.244 "trtype": "TCP", 00:20:38.244 "adrfam": "IPv4", 00:20:38.244 
"traddr": "10.0.0.2", 00:20:38.244 "trsvcid": "4420" 00:20:38.244 }, 00:20:38.244 "peer_address": { 00:20:38.244 "trtype": "TCP", 00:20:38.244 "adrfam": "IPv4", 00:20:38.244 "traddr": "10.0.0.1", 00:20:38.244 "trsvcid": "35190" 00:20:38.244 }, 00:20:38.244 "auth": { 00:20:38.244 "state": "completed", 00:20:38.244 "digest": "sha512", 00:20:38.244 "dhgroup": "null" 00:20:38.244 } 00:20:38.244 } 00:20:38.244 ]' 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.244 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.502 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:20:39.435 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.436 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.436 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.436 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.436 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.436 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.436 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.436 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.000 06:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.000 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.257 00:20:40.257 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.257 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.257 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.514 { 00:20:40.514 "cntlid": 101, 00:20:40.514 "qid": 0, 00:20:40.514 "state": "enabled", 00:20:40.514 "thread": "nvmf_tgt_poll_group_000", 00:20:40.514 "listen_address": { 00:20:40.514 "trtype": "TCP", 00:20:40.514 "adrfam": "IPv4", 00:20:40.514 "traddr": "10.0.0.2", 00:20:40.514 "trsvcid": "4420" 00:20:40.514 }, 00:20:40.514 "peer_address": { 00:20:40.514 "trtype": "TCP", 00:20:40.514 "adrfam": "IPv4", 00:20:40.514 "traddr": "10.0.0.1", 00:20:40.514 "trsvcid": "35216" 00:20:40.514 }, 00:20:40.514 "auth": { 00:20:40.514 "state": "completed", 00:20:40.514 "digest": "sha512", 00:20:40.514 "dhgroup": "null" 
00:20:40.514 } 00:20:40.514 } 00:20:40.514 ]' 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.514 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.515 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.515 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:40.515 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.515 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.515 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.515 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.772 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:20:41.704 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.961 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.961 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.961 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.961 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.961 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.961 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.961 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.219 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.477 00:20:42.477 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.477 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.477 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.737 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.738 { 00:20:42.738 "cntlid": 103, 00:20:42.738 "qid": 0, 00:20:42.738 "state": "enabled", 00:20:42.738 "thread": "nvmf_tgt_poll_group_000", 00:20:42.738 "listen_address": { 00:20:42.738 "trtype": "TCP", 00:20:42.738 "adrfam": "IPv4", 00:20:42.738 "traddr": "10.0.0.2", 00:20:42.738 "trsvcid": "4420" 00:20:42.738 }, 00:20:42.738 "peer_address": { 00:20:42.738 "trtype": "TCP", 00:20:42.738 "adrfam": "IPv4", 00:20:42.738 "traddr": "10.0.0.1", 00:20:42.738 "trsvcid": "35246" 00:20:42.738 }, 00:20:42.738 "auth": { 00:20:42.738 "state": "completed", 00:20:42.738 "digest": "sha512", 00:20:42.738 "dhgroup": "null" 00:20:42.738 } 00:20:42.738 } 00:20:42.738 ]' 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.738 06:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:42.738 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.738 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.738 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.738 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.009 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:20:43.941 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.941 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.941 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.941 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.941 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.941 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.942 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.942 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.942 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.200 06:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.200 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.458 00:20:44.716 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.716 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.716 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.716 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.716 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.716 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.716 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.974 { 00:20:44.974 "cntlid": 105, 00:20:44.974 "qid": 0, 00:20:44.974 "state": "enabled", 00:20:44.974 "thread": "nvmf_tgt_poll_group_000", 00:20:44.974 "listen_address": { 00:20:44.974 "trtype": "TCP", 00:20:44.974 "adrfam": "IPv4", 00:20:44.974 "traddr": "10.0.0.2", 00:20:44.974 "trsvcid": "4420" 00:20:44.974 }, 00:20:44.974 "peer_address": { 00:20:44.974 "trtype": "TCP", 00:20:44.974 "adrfam": "IPv4", 00:20:44.974 "traddr": "10.0.0.1", 00:20:44.974 "trsvcid": "35438" 00:20:44.974 }, 00:20:44.974 "auth": { 00:20:44.974 "state": "completed", 00:20:44.974 "digest": "sha512", 00:20:44.974 "dhgroup": "ffdhe2048" 00:20:44.974 } 00:20:44.974 } 00:20:44.974 ]' 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.974 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.232 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.166 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.423 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:46.423 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.423 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.424 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.681 00:20:46.681 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.681 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.681 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.939 { 00:20:46.939 "cntlid": 107, 00:20:46.939 "qid": 0, 00:20:46.939 "state": "enabled", 00:20:46.939 "thread": "nvmf_tgt_poll_group_000", 00:20:46.939 "listen_address": { 00:20:46.939 "trtype": "TCP", 00:20:46.939 "adrfam": "IPv4", 00:20:46.939 "traddr": "10.0.0.2", 00:20:46.939 "trsvcid": "4420" 00:20:46.939 }, 00:20:46.939 "peer_address": { 00:20:46.939 "trtype": "TCP", 00:20:46.939 "adrfam": "IPv4", 00:20:46.939 "traddr": "10.0.0.1", 00:20:46.939 "trsvcid": "35476" 00:20:46.939 }, 00:20:46.939 "auth": { 00:20:46.939 "state": "completed", 00:20:46.939 "digest": "sha512", 00:20:46.939 "dhgroup": "ffdhe2048" 00:20:46.939 } 00:20:46.939 } 00:20:46.939 ]' 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.939 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.197 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.197 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.197 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.197 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.197 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.455 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:48.390 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:48.648 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.906 00:20:48.906 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.906 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.906 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.164 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.164 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.164 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.164 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.164 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.164 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.164 { 00:20:49.164 "cntlid": 109, 00:20:49.164 "qid": 0, 00:20:49.164 "state": "enabled", 00:20:49.164 "thread": "nvmf_tgt_poll_group_000", 00:20:49.164 "listen_address": { 00:20:49.164 "trtype": "TCP", 00:20:49.164 "adrfam": "IPv4", 00:20:49.164 "traddr": "10.0.0.2", 00:20:49.164 "trsvcid": "4420" 00:20:49.164 }, 00:20:49.164 "peer_address": { 00:20:49.164 "trtype": "TCP", 00:20:49.164 "adrfam": "IPv4", 00:20:49.164 "traddr": "10.0.0.1", 00:20:49.164 "trsvcid": "35498" 00:20:49.164 }, 00:20:49.164 "auth": { 00:20:49.164 "state": "completed", 00:20:49.164 "digest": "sha512", 00:20:49.164 "dhgroup": "ffdhe2048" 00:20:49.164 } 00:20:49.164 } 00:20:49.164 ]' 00:20:49.164 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.421 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.421 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.421 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.421 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.421 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.421 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.422 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.678 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:20:50.608 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.608 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.608 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.608 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.608 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.608 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.608 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:50.609 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.866 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.123 00:20:51.123 06:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.123 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.123 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.381 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.381 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.381 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.381 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.638 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.638 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.638 { 00:20:51.638 "cntlid": 111, 00:20:51.638 "qid": 0, 00:20:51.638 "state": "enabled", 00:20:51.638 "thread": "nvmf_tgt_poll_group_000", 00:20:51.638 "listen_address": { 00:20:51.638 "trtype": "TCP", 00:20:51.638 "adrfam": "IPv4", 00:20:51.638 "traddr": "10.0.0.2", 00:20:51.638 "trsvcid": "4420" 00:20:51.638 }, 00:20:51.638 "peer_address": { 00:20:51.638 "trtype": "TCP", 00:20:51.638 "adrfam": "IPv4", 00:20:51.638 "traddr": "10.0.0.1", 00:20:51.638 "trsvcid": "35508" 00:20:51.638 }, 00:20:51.638 "auth": { 00:20:51.638 "state": "completed", 00:20:51.638 "digest": "sha512", 00:20:51.638 "dhgroup": "ffdhe2048" 00:20:51.638 } 00:20:51.638 } 00:20:51.638 ]' 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.639 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.897 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:20:52.829 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.829 06:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.829 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.830 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.830 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.830 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.830 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.830 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.830 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.087 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.345 00:20:53.345 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.345 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.345 06:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.603 { 00:20:53.603 "cntlid": 113, 00:20:53.603 "qid": 0, 00:20:53.603 "state": "enabled", 00:20:53.603 "thread": "nvmf_tgt_poll_group_000", 00:20:53.603 "listen_address": { 00:20:53.603 "trtype": "TCP", 00:20:53.603 "adrfam": "IPv4", 00:20:53.603 "traddr": "10.0.0.2", 00:20:53.603 "trsvcid": "4420" 00:20:53.603 }, 00:20:53.603 "peer_address": { 00:20:53.603 "trtype": "TCP", 00:20:53.603 "adrfam": "IPv4", 00:20:53.603 "traddr": "10.0.0.1", 00:20:53.603 "trsvcid": "35542" 00:20:53.603 }, 00:20:53.603 "auth": { 00:20:53.603 "state": "completed", 00:20:53.603 "digest": "sha512", 00:20:53.603 "dhgroup": "ffdhe3072" 00:20:53.603 } 00:20:53.603 } 00:20:53.603 ]' 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.603 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.860 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.860 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.860 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.860 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.860 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.118 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.050 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.343 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.601 00:20:55.601 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.601 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.601 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.859 { 00:20:55.859 "cntlid": 115, 00:20:55.859 "qid": 0, 00:20:55.859 "state": "enabled", 00:20:55.859 "thread": "nvmf_tgt_poll_group_000", 00:20:55.859 "listen_address": { 00:20:55.859 "trtype": "TCP", 00:20:55.859 "adrfam": "IPv4", 00:20:55.859 "traddr": "10.0.0.2", 00:20:55.859 "trsvcid": "4420" 00:20:55.859 }, 00:20:55.859 "peer_address": { 00:20:55.859 "trtype": "TCP", 00:20:55.859 "adrfam": "IPv4", 00:20:55.859 "traddr": "10.0.0.1", 00:20:55.859 "trsvcid": "40310" 00:20:55.859 }, 00:20:55.859 "auth": { 00:20:55.859 "state": "completed", 00:20:55.859 "digest": "sha512", 00:20:55.859 "dhgroup": "ffdhe3072" 00:20:55.859 } 00:20:55.859 } 00:20:55.859 ]' 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.859 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.117 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:20:57.049 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.049 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.049 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.049 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.049 06:16:50 
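Between target-side passes the same credentials are also exercised from the kernel initiator, as in the round that just finished above: nvme-cli connects with the matching DHHC-1 host and controller secrets, disconnects, and the host NQN is removed so the next digest/dhgroup pass starts clean. A minimal sketch of that teardown step with the transport details from this log (HOST_SECRET and CTRL_SECRET are placeholder variables standing for the generated DHHC-1:xx:...: strings, which are not reproduced here):

  # kernel-initiator connect using the DH-HMAC-CHAP secrets that match the key under test
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # revoke the host so the next digest/dhgroup pass re-adds it with fresh parameters
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55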
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.049 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.049 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.049 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.308 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.874 00:20:57.874 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.874 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.874 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.874 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.874 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.874 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.874 06:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.132 { 00:20:58.132 "cntlid": 117, 00:20:58.132 "qid": 0, 00:20:58.132 "state": "enabled", 00:20:58.132 "thread": "nvmf_tgt_poll_group_000", 00:20:58.132 "listen_address": { 00:20:58.132 "trtype": "TCP", 00:20:58.132 "adrfam": "IPv4", 00:20:58.132 "traddr": "10.0.0.2", 00:20:58.132 "trsvcid": "4420" 00:20:58.132 }, 00:20:58.132 "peer_address": { 00:20:58.132 "trtype": "TCP", 00:20:58.132 "adrfam": "IPv4", 00:20:58.132 "traddr": "10.0.0.1", 00:20:58.132 "trsvcid": "40342" 00:20:58.132 }, 00:20:58.132 "auth": { 00:20:58.132 "state": "completed", 00:20:58.132 "digest": "sha512", 00:20:58.132 "dhgroup": "ffdhe3072" 00:20:58.132 } 00:20:58.132 } 00:20:58.132 ]' 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.132 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.389 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:20:59.322 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.322 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.322 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.322 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.322 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.322 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.323 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:59.323 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.581 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.839 00:20:59.839 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.839 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.839 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.098 { 00:21:00.098 "cntlid": 119, 00:21:00.098 "qid": 0, 00:21:00.098 "state": "enabled", 00:21:00.098 "thread": 
"nvmf_tgt_poll_group_000", 00:21:00.098 "listen_address": { 00:21:00.098 "trtype": "TCP", 00:21:00.098 "adrfam": "IPv4", 00:21:00.098 "traddr": "10.0.0.2", 00:21:00.098 "trsvcid": "4420" 00:21:00.098 }, 00:21:00.098 "peer_address": { 00:21:00.098 "trtype": "TCP", 00:21:00.098 "adrfam": "IPv4", 00:21:00.098 "traddr": "10.0.0.1", 00:21:00.098 "trsvcid": "40374" 00:21:00.098 }, 00:21:00.098 "auth": { 00:21:00.098 "state": "completed", 00:21:00.098 "digest": "sha512", 00:21:00.098 "dhgroup": "ffdhe3072" 00:21:00.098 } 00:21:00.098 } 00:21:00.098 ]' 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.098 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.356 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.356 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.356 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.356 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.356 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.614 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:21:01.547 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.547 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.547 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.547 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.548 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.548 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.548 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.548 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.548 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.806 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.372 00:21:02.372 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.372 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.372 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.630 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.630 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.630 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.630 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.631 { 00:21:02.631 "cntlid": 121, 00:21:02.631 "qid": 0, 00:21:02.631 "state": "enabled", 00:21:02.631 "thread": "nvmf_tgt_poll_group_000", 00:21:02.631 "listen_address": { 00:21:02.631 "trtype": "TCP", 00:21:02.631 "adrfam": "IPv4", 00:21:02.631 "traddr": "10.0.0.2", 00:21:02.631 "trsvcid": "4420" 00:21:02.631 }, 00:21:02.631 "peer_address": { 00:21:02.631 "trtype": "TCP", 00:21:02.631 "adrfam": 
"IPv4", 00:21:02.631 "traddr": "10.0.0.1", 00:21:02.631 "trsvcid": "40402" 00:21:02.631 }, 00:21:02.631 "auth": { 00:21:02.631 "state": "completed", 00:21:02.631 "digest": "sha512", 00:21:02.631 "dhgroup": "ffdhe4096" 00:21:02.631 } 00:21:02.631 } 00:21:02.631 ]' 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.888 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:21:03.822 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.822 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.822 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.822 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.822 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.822 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.081 
06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.081 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.339 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.339 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.339 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.596 00:21:04.596 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.596 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.597 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.854 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.854 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.854 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.854 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.854 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.854 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.854 { 00:21:04.854 "cntlid": 123, 00:21:04.854 "qid": 0, 00:21:04.854 "state": "enabled", 00:21:04.854 "thread": "nvmf_tgt_poll_group_000", 00:21:04.854 "listen_address": { 00:21:04.854 "trtype": "TCP", 00:21:04.854 "adrfam": "IPv4", 00:21:04.854 "traddr": "10.0.0.2", 00:21:04.854 "trsvcid": "4420" 00:21:04.854 }, 00:21:04.854 "peer_address": { 00:21:04.854 "trtype": "TCP", 00:21:04.854 "adrfam": "IPv4", 00:21:04.854 "traddr": "10.0.0.1", 00:21:04.854 "trsvcid": "56082" 00:21:04.854 }, 00:21:04.854 "auth": { 00:21:04.854 "state": "completed", 00:21:04.854 "digest": "sha512", 00:21:04.854 "dhgroup": "ffdhe4096" 00:21:04.854 } 00:21:04.854 } 00:21:04.854 ]' 00:21:04.854 06:16:58 
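After each attach the script confirms both that the host sees the controller and that the target-side admin queue pair actually negotiated the expected authentication parameters, filtering the RPC output with jq exactly as in the checks above. A compact sketch of those checks with the values from the ffdhe4096 pass just shown (RPC is the same shorthand as before):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # the host must report the attached controller by name
  [[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # the target-side qpair must show a completed DH-HMAC-CHAP exchange with the expected parameters
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]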
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.113 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.113 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.113 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.113 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.113 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.113 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.113 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.371 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.304 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.562 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.820 00:21:07.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.336 { 00:21:07.336 "cntlid": 125, 00:21:07.336 "qid": 0, 00:21:07.336 "state": "enabled", 00:21:07.336 "thread": "nvmf_tgt_poll_group_000", 00:21:07.336 "listen_address": { 00:21:07.336 "trtype": "TCP", 00:21:07.336 "adrfam": "IPv4", 00:21:07.336 "traddr": "10.0.0.2", 00:21:07.336 "trsvcid": "4420" 00:21:07.336 }, 00:21:07.336 "peer_address": { 00:21:07.336 "trtype": "TCP", 00:21:07.336 "adrfam": "IPv4", 00:21:07.336 "traddr": "10.0.0.1", 00:21:07.336 "trsvcid": "56104" 00:21:07.336 }, 00:21:07.336 "auth": { 00:21:07.336 "state": "completed", 00:21:07.336 "digest": "sha512", 00:21:07.336 "dhgroup": "ffdhe4096" 00:21:07.336 } 00:21:07.336 } 00:21:07.336 ]' 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.336 
06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.336 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.337 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.595 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.531 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
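In the key3 passes, including the one that begins here, add_host and attach_controller carry only --dhchap-key key3 and no controller key: auth.sh builds that argument conditionally, as the source line echoed at target/auth.sh@37 shows, and the conditional expands to nothing because no ckey exists for index 3. A sketch of that pattern, with keyid standing in for the function's third positional parameter (an illustrative rename, not the script's literal text):

  # Append the controller-key arguments only when a ckey is defined for this index;
  # there is no ckey3, so key3 passes authenticate with the host key alone.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key "key$keyid" "${ckey[@]}"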
xtrace_disable 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.097 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.355 00:21:09.355 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.355 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.355 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.613 { 00:21:09.613 "cntlid": 127, 00:21:09.613 "qid": 0, 00:21:09.613 "state": "enabled", 00:21:09.613 "thread": "nvmf_tgt_poll_group_000", 00:21:09.613 "listen_address": { 00:21:09.613 "trtype": "TCP", 00:21:09.613 "adrfam": "IPv4", 00:21:09.613 "traddr": "10.0.0.2", 00:21:09.613 "trsvcid": "4420" 00:21:09.613 }, 00:21:09.613 "peer_address": { 00:21:09.613 "trtype": "TCP", 00:21:09.613 "adrfam": "IPv4", 00:21:09.613 "traddr": "10.0.0.1", 00:21:09.613 "trsvcid": "56122" 00:21:09.613 }, 00:21:09.613 "auth": { 00:21:09.613 "state": "completed", 00:21:09.613 "digest": "sha512", 00:21:09.613 "dhgroup": "ffdhe4096" 00:21:09.613 } 00:21:09.613 } 00:21:09.613 ]' 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.613 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.871 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.871 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.871 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.871 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.871 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.134 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.069 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.327 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.893 00:21:11.893 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.893 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.893 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.151 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.151 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.151 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.151 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.151 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.151 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.151 { 00:21:12.151 "cntlid": 129, 00:21:12.151 "qid": 0, 00:21:12.151 "state": "enabled", 00:21:12.151 "thread": "nvmf_tgt_poll_group_000", 00:21:12.151 "listen_address": { 00:21:12.151 "trtype": "TCP", 00:21:12.151 "adrfam": "IPv4", 00:21:12.151 "traddr": "10.0.0.2", 00:21:12.151 "trsvcid": "4420" 00:21:12.151 }, 00:21:12.151 "peer_address": { 00:21:12.151 "trtype": "TCP", 00:21:12.151 "adrfam": "IPv4", 00:21:12.151 "traddr": "10.0.0.1", 00:21:12.151 "trsvcid": "56150" 00:21:12.151 }, 00:21:12.151 "auth": { 00:21:12.151 "state": "completed", 00:21:12.151 "digest": "sha512", 00:21:12.151 "dhgroup": "ffdhe6144" 00:21:12.151 } 00:21:12.151 } 00:21:12.151 ]' 00:21:12.151 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.409 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.409 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.409 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.409 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.409 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.409 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.409 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.667 
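The whole excerpt is driven by two loops, echoed as source lines at target/auth.sh@92-94: every DH group is crossed with every key index, and each combination restricts the host to that single group before running one authenticated attach. A sketch of that driver, assuming dhgroups and keys are the arrays populated earlier in auth.sh (outside this excerpt) and using the sha512 digest exercised throughout this part of the log; the expanded per-iteration calls in the sketch match the set_options and connect_authenticate lines traced above:

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do       # key indices 0 1 2 3
          # limit the host to one digest/dhgroup pair, then run a full attach/verify/teardown pass
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done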
06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:21:13.600 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.600 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.600 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.600 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.601 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.601 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.601 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.601 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.859 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.424 00:21:14.424 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.424 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.424 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.682 { 00:21:14.682 "cntlid": 131, 00:21:14.682 "qid": 0, 00:21:14.682 "state": "enabled", 00:21:14.682 "thread": "nvmf_tgt_poll_group_000", 00:21:14.682 "listen_address": { 00:21:14.682 "trtype": "TCP", 00:21:14.682 "adrfam": "IPv4", 00:21:14.682 "traddr": "10.0.0.2", 00:21:14.682 "trsvcid": "4420" 00:21:14.682 }, 00:21:14.682 "peer_address": { 00:21:14.682 "trtype": "TCP", 00:21:14.682 "adrfam": "IPv4", 00:21:14.682 "traddr": "10.0.0.1", 00:21:14.682 "trsvcid": "43218" 00:21:14.682 }, 00:21:14.682 "auth": { 00:21:14.682 "state": "completed", 00:21:14.682 "digest": "sha512", 00:21:14.682 "dhgroup": "ffdhe6144" 00:21:14.682 } 00:21:14.682 } 00:21:14.682 ]' 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.682 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.683 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.941 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.872 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.438 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.439 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.005 
00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.005 { 00:21:17.005 "cntlid": 133, 00:21:17.005 "qid": 0, 00:21:17.005 "state": "enabled", 00:21:17.005 "thread": "nvmf_tgt_poll_group_000", 00:21:17.005 "listen_address": { 00:21:17.005 "trtype": "TCP", 00:21:17.005 "adrfam": "IPv4", 00:21:17.005 "traddr": "10.0.0.2", 00:21:17.005 "trsvcid": "4420" 00:21:17.005 }, 00:21:17.005 "peer_address": { 00:21:17.005 "trtype": "TCP", 00:21:17.005 "adrfam": "IPv4", 00:21:17.005 "traddr": "10.0.0.1", 00:21:17.005 "trsvcid": "43248" 00:21:17.005 }, 00:21:17.005 "auth": { 00:21:17.005 "state": "completed", 00:21:17.005 "digest": "sha512", 00:21:17.005 "dhgroup": "ffdhe6144" 00:21:17.005 } 00:21:17.005 } 00:21:17.005 ]' 00:21:17.005 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.263 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.263 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.263 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.263 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.263 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.263 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.263 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.520 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:21:18.486 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.486 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:21:18.486 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.486 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.486 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.486 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.486 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.486 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.487 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.744 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.311 00:21:19.311 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.311 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.311 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.569 { 00:21:19.569 "cntlid": 135, 00:21:19.569 "qid": 0, 00:21:19.569 "state": "enabled", 00:21:19.569 "thread": "nvmf_tgt_poll_group_000", 00:21:19.569 "listen_address": { 00:21:19.569 "trtype": "TCP", 00:21:19.569 "adrfam": "IPv4", 00:21:19.569 "traddr": "10.0.0.2", 00:21:19.569 "trsvcid": "4420" 00:21:19.569 }, 00:21:19.569 "peer_address": { 00:21:19.569 "trtype": "TCP", 00:21:19.569 "adrfam": "IPv4", 00:21:19.569 "traddr": "10.0.0.1", 00:21:19.569 "trsvcid": "43282" 00:21:19.569 }, 00:21:19.569 "auth": { 00:21:19.569 "state": "completed", 00:21:19.569 "digest": "sha512", 00:21:19.569 "dhgroup": "ffdhe6144" 00:21:19.569 } 00:21:19.569 } 00:21:19.569 ]' 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.569 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.827 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.767 
06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.767 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.026 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.964 00:21:21.964 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.964 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.964 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.222 
06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.222 { 00:21:22.222 "cntlid": 137, 00:21:22.222 "qid": 0, 00:21:22.222 "state": "enabled", 00:21:22.222 "thread": "nvmf_tgt_poll_group_000", 00:21:22.222 "listen_address": { 00:21:22.222 "trtype": "TCP", 00:21:22.222 "adrfam": "IPv4", 00:21:22.222 "traddr": "10.0.0.2", 00:21:22.222 "trsvcid": "4420" 00:21:22.222 }, 00:21:22.222 "peer_address": { 00:21:22.222 "trtype": "TCP", 00:21:22.222 "adrfam": "IPv4", 00:21:22.222 "traddr": "10.0.0.1", 00:21:22.222 "trsvcid": "43302" 00:21:22.222 }, 00:21:22.222 "auth": { 00:21:22.222 "state": "completed", 00:21:22.222 "digest": "sha512", 00:21:22.222 "dhgroup": "ffdhe8192" 00:21:22.222 } 00:21:22.222 } 00:21:22.222 ]' 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.222 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.481 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.481 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.481 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.481 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.481 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.738 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.675 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.933 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.867 00:21:24.868 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.868 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.868 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.125 { 00:21:25.125 "cntlid": 139, 00:21:25.125 "qid": 0, 00:21:25.125 "state": "enabled", 00:21:25.125 "thread": "nvmf_tgt_poll_group_000", 00:21:25.125 "listen_address": { 00:21:25.125 "trtype": "TCP", 00:21:25.125 "adrfam": "IPv4", 00:21:25.125 "traddr": "10.0.0.2", 00:21:25.125 "trsvcid": "4420" 00:21:25.125 }, 00:21:25.125 "peer_address": { 00:21:25.125 "trtype": "TCP", 00:21:25.125 "adrfam": "IPv4", 00:21:25.125 "traddr": "10.0.0.1", 00:21:25.125 "trsvcid": "34496" 00:21:25.125 }, 00:21:25.125 "auth": { 00:21:25.125 "state": "completed", 00:21:25.125 "digest": "sha512", 00:21:25.125 "dhgroup": "ffdhe8192" 00:21:25.125 } 00:21:25.125 } 00:21:25.125 ]' 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.125 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.383 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.383 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.383 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.383 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.383 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.641 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTUyZWI3Y2U2MzdkODkwNGJhZjJkZTIyMmFmNTRiZDIc9wu+: --dhchap-ctrl-secret DHHC-1:02:NDJmNTc3ZjUyOGZjOWNlNDZhN2MzZTkxZmY5YWE4N2Q2YmVjMGM1ZTFiZDFkODgxeXhWTA==: 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:26.580 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.838 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.777 00:21:27.777 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.777 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.777 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.035 { 00:21:28.035 "cntlid": 141, 00:21:28.035 "qid": 0, 00:21:28.035 "state": "enabled", 00:21:28.035 "thread": "nvmf_tgt_poll_group_000", 00:21:28.035 "listen_address": 
{ 00:21:28.035 "trtype": "TCP", 00:21:28.035 "adrfam": "IPv4", 00:21:28.035 "traddr": "10.0.0.2", 00:21:28.035 "trsvcid": "4420" 00:21:28.035 }, 00:21:28.035 "peer_address": { 00:21:28.035 "trtype": "TCP", 00:21:28.035 "adrfam": "IPv4", 00:21:28.035 "traddr": "10.0.0.1", 00:21:28.035 "trsvcid": "34528" 00:21:28.035 }, 00:21:28.035 "auth": { 00:21:28.035 "state": "completed", 00:21:28.035 "digest": "sha512", 00:21:28.035 "dhgroup": "ffdhe8192" 00:21:28.035 } 00:21:28.035 } 00:21:28.035 ]' 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.035 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.295 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.295 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.295 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.295 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NjhiYzkxMTA3ZjcxYmU2ZjBkNDgxZWYyMjJiMzNiOTBiYmQ5NTgxYjExZTc3ZTY08kEGcw==: --dhchap-ctrl-secret DHHC-1:01:ZmFhOTE4ZjhhNWU4YjQ2YTI1ZTgxN2M3NGIwNTlkMDUCBrFl: 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:29.674 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.675 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.610 00:21:30.610 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.610 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.610 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.867 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.867 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.867 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.867 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.867 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.868 { 00:21:30.868 "cntlid": 143, 00:21:30.868 "qid": 0, 00:21:30.868 "state": "enabled", 00:21:30.868 "thread": "nvmf_tgt_poll_group_000", 00:21:30.868 "listen_address": { 00:21:30.868 "trtype": "TCP", 00:21:30.868 "adrfam": "IPv4", 00:21:30.868 "traddr": "10.0.0.2", 00:21:30.868 "trsvcid": "4420" 00:21:30.868 }, 00:21:30.868 "peer_address": { 00:21:30.868 "trtype": "TCP", 00:21:30.868 "adrfam": "IPv4", 00:21:30.868 "traddr": "10.0.0.1", 00:21:30.868 "trsvcid": "34550" 00:21:30.868 }, 00:21:30.868 "auth": { 00:21:30.868 "state": "completed", 00:21:30.868 "digest": "sha512", 00:21:30.868 "dhgroup": 
"ffdhe8192" 00:21:30.868 } 00:21:30.868 } 00:21:30.868 ]' 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.868 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.436 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.374 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.311 00:21:33.311 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.311 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.311 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.569 { 00:21:33.569 "cntlid": 145, 00:21:33.569 "qid": 0, 00:21:33.569 "state": "enabled", 00:21:33.569 "thread": "nvmf_tgt_poll_group_000", 00:21:33.569 "listen_address": { 00:21:33.569 "trtype": "TCP", 00:21:33.569 "adrfam": "IPv4", 00:21:33.569 "traddr": "10.0.0.2", 00:21:33.569 "trsvcid": "4420" 00:21:33.569 }, 00:21:33.569 "peer_address": { 00:21:33.569 "trtype": "TCP", 00:21:33.569 "adrfam": "IPv4", 00:21:33.569 "traddr": "10.0.0.1", 00:21:33.569 "trsvcid": "34586" 00:21:33.569 }, 00:21:33.569 "auth": { 00:21:33.569 
"state": "completed", 00:21:33.569 "digest": "sha512", 00:21:33.569 "dhgroup": "ffdhe8192" 00:21:33.569 } 00:21:33.569 } 00:21:33.569 ]' 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.569 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.827 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.827 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.827 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.086 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDMxODc2M2I0NTE4OWY2ZWRjMTc0MjlmNDY2OGVhOGRhMjU1YTc2NTNkZTU1NjcxPCKJ4w==: --dhchap-ctrl-secret DHHC-1:03:OWViMDQyOWM1YWNiNTEyMjllZTI3MTg0ZTFmNDdhZGQ2YWFjOGYyODZjNjc0OWIwYmEyODcwNzJlODkxZTVkNF6lMFo=: 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:35.021 06:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:35.021 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:35.955 request: 00:21:35.955 { 00:21:35.955 "name": "nvme0", 00:21:35.955 "trtype": "tcp", 00:21:35.955 "traddr": "10.0.0.2", 00:21:35.955 "adrfam": "ipv4", 00:21:35.955 "trsvcid": "4420", 00:21:35.955 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.955 "prchk_reftag": false, 00:21:35.955 "prchk_guard": false, 00:21:35.955 "hdgst": false, 00:21:35.955 "ddgst": false, 00:21:35.955 "dhchap_key": "key2", 00:21:35.955 "method": "bdev_nvme_attach_controller", 00:21:35.955 "req_id": 1 00:21:35.955 } 00:21:35.955 Got JSON-RPC error response 00:21:35.955 response: 00:21:35.955 { 00:21:35.955 "code": -5, 00:21:35.955 "message": "Input/output error" 00:21:35.955 } 00:21:35.955 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:35.955 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.955 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.955 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.955 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.956 
06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:35.956 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.892 request: 00:21:36.892 { 00:21:36.892 "name": "nvme0", 00:21:36.892 "trtype": "tcp", 00:21:36.892 "traddr": "10.0.0.2", 00:21:36.892 "adrfam": "ipv4", 00:21:36.892 "trsvcid": "4420", 00:21:36.892 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.892 "prchk_reftag": false, 00:21:36.892 "prchk_guard": false, 00:21:36.892 "hdgst": false, 00:21:36.892 "ddgst": false, 00:21:36.892 "dhchap_key": "key1", 00:21:36.892 "dhchap_ctrlr_key": "ckey2", 00:21:36.892 "method": "bdev_nvme_attach_controller", 00:21:36.892 "req_id": 1 00:21:36.892 } 00:21:36.892 Got JSON-RPC error response 00:21:36.892 response: 00:21:36.892 { 00:21:36.892 "code": -5, 00:21:36.892 "message": "Input/output error" 00:21:36.892 } 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:36.892 06:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.892 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.458 request: 00:21:37.458 { 00:21:37.458 "name": "nvme0", 00:21:37.458 "trtype": "tcp", 00:21:37.458 "traddr": "10.0.0.2", 00:21:37.458 "adrfam": "ipv4", 00:21:37.458 "trsvcid": "4420", 00:21:37.458 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.458 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.458 "prchk_reftag": false, 00:21:37.458 "prchk_guard": false, 00:21:37.458 "hdgst": false, 00:21:37.458 "ddgst": false, 00:21:37.458 "dhchap_key": "key1", 00:21:37.458 "dhchap_ctrlr_key": "ckey1", 00:21:37.458 "method": "bdev_nvme_attach_controller", 00:21:37.458 "req_id": 1 00:21:37.458 } 00:21:37.458 Got JSON-RPC error response 00:21:37.458 response: 00:21:37.458 { 00:21:37.459 "code": -5, 00:21:37.459 "message": "Input/output error" 00:21:37.459 } 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1743336 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1743336 ']' 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1743336 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743336 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743336' 00:21:37.459 killing process with pid 1743336 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1743336 00:21:37.459 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1743336 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1765828 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1765828 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1765828 ']' 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.716 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.974 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.974 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:37.974 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.974 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.974 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.974 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1765828 00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1765828 ']' 00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
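[Note: the connect_authenticate step that follows repeats a pattern used throughout this test. A minimal sketch of that pattern, assuming the DH-CHAP keys referenced here (key0..key3, ckey*) were loaded earlier in the run, and using only the RPCs, sockets and NQNs that appear verbatim in this log:]

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side: allow the host on the subsystem and bind DH-CHAP key3 to it
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
  # host side (the application listening on /var/tmp/host.sock): attach with the matching key
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
  # verify the negotiated auth parameters on the target's qpair
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'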
00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.975 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.232 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.232 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:38.232 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:38.232 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.232 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.490 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:38.491 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.491 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.491 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.491 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.491 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.428 00:21:39.428 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.428 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.428 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.686 { 00:21:39.686 "cntlid": 1, 00:21:39.686 "qid": 0, 00:21:39.686 "state": "enabled", 00:21:39.686 "thread": "nvmf_tgt_poll_group_000", 00:21:39.686 "listen_address": { 00:21:39.686 "trtype": "TCP", 00:21:39.686 "adrfam": "IPv4", 00:21:39.686 "traddr": "10.0.0.2", 00:21:39.686 "trsvcid": "4420" 00:21:39.686 }, 00:21:39.686 "peer_address": { 00:21:39.686 "trtype": "TCP", 00:21:39.686 "adrfam": "IPv4", 00:21:39.686 "traddr": "10.0.0.1", 00:21:39.686 "trsvcid": "54694" 00:21:39.686 }, 00:21:39.686 "auth": { 00:21:39.686 "state": "completed", 00:21:39.686 "digest": "sha512", 00:21:39.686 "dhgroup": "ffdhe8192" 00:21:39.686 } 00:21:39.686 } 00:21:39.686 ]' 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.686 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.943 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODU4MWVmZTkzNTRhNGJkMjRjZThkZDA5MjkxMjY5YTBjNzlhYzFkOGJmOTdlYWVkYmNmZWJjNTlhY2RiM2NkYwvCAz4=: 00:21:40.876 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.876 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.876 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:40.877 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.135 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.393 request: 00:21:41.393 { 00:21:41.393 "name": "nvme0", 00:21:41.393 "trtype": "tcp", 00:21:41.393 "traddr": "10.0.0.2", 00:21:41.393 "adrfam": "ipv4", 00:21:41.393 "trsvcid": "4420", 00:21:41.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.393 "prchk_reftag": false, 00:21:41.393 "prchk_guard": false, 00:21:41.393 "hdgst": false, 00:21:41.393 "ddgst": false, 00:21:41.393 "dhchap_key": "key3", 00:21:41.393 "method": "bdev_nvme_attach_controller", 00:21:41.393 "req_id": 1 00:21:41.393 } 00:21:41.393 Got JSON-RPC error response 00:21:41.393 response: 00:21:41.393 { 00:21:41.393 "code": -5, 00:21:41.393 "message": "Input/output error" 00:21:41.393 } 00:21:41.393 06:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:41.393 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.393 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.393 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.393 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:41.393 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:41.393 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:41.393 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.651 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.910 request: 00:21:41.910 { 00:21:41.910 "name": "nvme0", 00:21:41.910 "trtype": "tcp", 00:21:41.910 "traddr": "10.0.0.2", 00:21:41.910 "adrfam": "ipv4", 00:21:41.910 "trsvcid": "4420", 00:21:41.910 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.910 "prchk_reftag": false, 00:21:41.910 "prchk_guard": false, 00:21:41.910 "hdgst": false, 00:21:41.910 "ddgst": false, 00:21:41.910 "dhchap_key": "key3", 00:21:41.910 
"method": "bdev_nvme_attach_controller", 00:21:41.910 "req_id": 1 00:21:41.910 } 00:21:41.910 Got JSON-RPC error response 00:21:41.910 response: 00:21:41.910 { 00:21:41.910 "code": -5, 00:21:41.910 "message": "Input/output error" 00:21:41.910 } 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.910 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.169 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.429 request: 00:21:42.429 { 00:21:42.429 "name": "nvme0", 00:21:42.429 "trtype": "tcp", 00:21:42.429 "traddr": "10.0.0.2", 00:21:42.429 "adrfam": "ipv4", 00:21:42.429 "trsvcid": "4420", 00:21:42.429 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:42.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.429 "prchk_reftag": false, 00:21:42.429 "prchk_guard": false, 00:21:42.429 "hdgst": false, 00:21:42.429 "ddgst": false, 00:21:42.429 "dhchap_key": "key0", 00:21:42.429 "dhchap_ctrlr_key": "key1", 00:21:42.429 "method": "bdev_nvme_attach_controller", 00:21:42.429 "req_id": 1 00:21:42.429 } 00:21:42.429 Got JSON-RPC error response 00:21:42.429 response: 00:21:42.429 { 00:21:42.429 "code": -5, 00:21:42.429 "message": "Input/output error" 00:21:42.429 } 00:21:42.429 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:42.429 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:42.429 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:42.429 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:42.429 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:42.429 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:42.687 00:21:42.687 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:42.687 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
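[Note: the JSON-RPC code -5 (Input/output error) responses above are the expected outcome of the negative cases: each deliberately mismatched attach is wrapped in the NOT helper, which passes only when the wrapped command fails. Illustrative shape only, reusing the NOT and hostrpc helpers that the test scripts themselves define:]

  # a missing or mismatched DH-CHAP key must make the host-side attach fail;
  # NOT inverts the exit status, so that failure is what lets the test continue
  NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key key1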
00:21:42.687 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.946 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.946 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.946 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1743359 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1743359 ']' 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1743359 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743359 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743359' 00:21:43.204 killing process with pid 1743359 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1743359 00:21:43.204 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1743359 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.772 rmmod nvme_tcp 00:21:43.772 rmmod nvme_fabrics 00:21:43.772 rmmod nvme_keyring 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1765828 ']' 00:21:43.772 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1765828 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1765828 ']' 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1765828 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765828 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765828' 00:21:43.773 killing process with pid 1765828 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1765828 00:21:43.773 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1765828 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.033 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kqV /tmp/spdk.key-sha256.FGn /tmp/spdk.key-sha384.KKt /tmp/spdk.key-sha512.C4u /tmp/spdk.key-sha512.DIG /tmp/spdk.key-sha384.zWp /tmp/spdk.key-sha256.8pY '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:45.943 00:21:45.943 real 3m8.421s 00:21:45.943 user 7m18.210s 00:21:45.943 sys 0m24.824s 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.943 ************************************ 00:21:45.943 END TEST nvmf_auth_target 00:21:45.943 ************************************ 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.943 ************************************ 00:21:45.943 START TEST nvmf_bdevio_no_huge 00:21:45.943 ************************************ 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:45.943 * Looking for test storage... 00:21:45.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
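[Note: this test exercises the same NVMe-oF/TCP stack but starts the target without hugepages. A sketch of the equivalent manual launch, using the flags that appear further down in this log (--no-huge together with -s 1024 runs DPDK on 1024 MB of ordinary memory, -m 0x78 places the reactors on cores 3-6):]

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78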
00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.943 06:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.943 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.206 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.111 06:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:48.111 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:48.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.112 06:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:48.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:48.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.112 
06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:48.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.112 
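For readers reproducing this step outside the harness, the nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables commands. This is a minimal sketch using the interface names and addresses seen in this run (cvl_0_0, cvl_0_1, 10.0.0.0/24); on other hardware the device names will differ:

  # Target-side port moves into its own network namespace; the initiator port stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  # Connectivity is then verified in both directions, as the ping output traced next shows:
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1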
06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:48.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:21:48.112 00:21:48.112 --- 10.0.0.2 ping statistics --- 00:21:48.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.112 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:21:48.112 00:21:48.112 --- 10.0.0.1 ping statistics --- 00:21:48.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.112 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1768475 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1768475 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1768475 ']' 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:48.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.112 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.112 [2024-07-23 06:17:41.285884] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:48.112 [2024-07-23 06:17:41.285987] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:48.112 [2024-07-23 06:17:41.337638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:48.112 [2024-07-23 06:17:41.358463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.112 [2024-07-23 06:17:41.450176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.112 [2024-07-23 06:17:41.450234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.112 [2024-07-23 06:17:41.450260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.113 [2024-07-23 06:17:41.450274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.113 [2024-07-23 06:17:41.450286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.113 [2024-07-23 06:17:41.450541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:48.113 [2024-07-23 06:17:41.450646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:48.113 [2024-07-23 06:17:41.450700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:48.113 [2024-07-23 06:17:41.450703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.371 [2024-07-23 06:17:41.578094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.371 06:17:41 
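The target for this bdevio run is started inside the namespace without hugepages and then provisioned over /var/tmp/spdk.sock. A condensed sketch of the steps traced here and on the following lines; the rpc_cmd helper used in the trace is roughly equivalent to invoking scripts/rpc.py directly, and the workspace prefix /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk is shortened to ./ for readability:

  # Start the target in the namespace with 1024 MiB of regular (non-hugepage) memory on cores 0x78.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  # Provision transport, bdev, subsystem, namespace and listener (bdevio.sh@18-22 above/below).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then pointed at this listener through the generated JSON on /dev/fd/62, as the bdevio.sh@24 lines below show.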
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.371 Malloc0 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.371 [2024-07-23 06:17:41.616482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.371 { 00:21:48.371 "params": { 00:21:48.371 "name": "Nvme$subsystem", 00:21:48.371 "trtype": "$TEST_TRANSPORT", 00:21:48.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.371 "adrfam": "ipv4", 00:21:48.371 "trsvcid": "$NVMF_PORT", 00:21:48.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.371 "hdgst": ${hdgst:-false}, 00:21:48.371 "ddgst": ${ddgst:-false} 00:21:48.371 }, 00:21:48.371 "method": "bdev_nvme_attach_controller" 00:21:48.371 } 00:21:48.371 
EOF 00:21:48.371 )") 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:48.371 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:48.371 "params": { 00:21:48.371 "name": "Nvme1", 00:21:48.371 "trtype": "tcp", 00:21:48.371 "traddr": "10.0.0.2", 00:21:48.371 "adrfam": "ipv4", 00:21:48.371 "trsvcid": "4420", 00:21:48.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.371 "hdgst": false, 00:21:48.371 "ddgst": false 00:21:48.371 }, 00:21:48.371 "method": "bdev_nvme_attach_controller" 00:21:48.371 }' 00:21:48.371 [2024-07-23 06:17:41.665684] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:48.371 [2024-07-23 06:17:41.665774] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1768616 ] 00:21:48.371 [2024-07-23 06:17:41.709802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:48.629 [2024-07-23 06:17:41.730722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:48.629 [2024-07-23 06:17:41.813441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.629 [2024-07-23 06:17:41.813489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.629 [2024-07-23 06:17:41.813492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.887 I/O targets: 00:21:48.887 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:48.887 00:21:48.887 00:21:48.887 CUnit - A unit testing framework for C - Version 2.1-3 00:21:48.887 http://cunit.sourceforge.net/ 00:21:48.887 00:21:48.887 00:21:48.887 Suite: bdevio tests on: Nvme1n1 00:21:48.887 Test: blockdev write read block ...passed 00:21:49.146 Test: blockdev write zeroes read block ...passed 00:21:49.146 Test: blockdev write zeroes read no split ...passed 00:21:49.146 Test: blockdev write zeroes read split ...passed 00:21:49.146 Test: blockdev write zeroes read split partial ...passed 00:21:49.146 Test: blockdev reset ...[2024-07-23 06:17:42.341521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.146 [2024-07-23 06:17:42.341643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3d330 (9): Bad file descriptor 00:21:49.405 [2024-07-23 06:17:42.491394] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:49.405 passed 00:21:49.405 Test: blockdev write read 8 blocks ...passed 00:21:49.405 Test: blockdev write read size > 128k ...passed 00:21:49.405 Test: blockdev write read invalid size ...passed 00:21:49.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:49.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:49.405 Test: blockdev write read max offset ...passed 00:21:49.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:49.405 Test: blockdev writev readv 8 blocks ...passed 00:21:49.405 Test: blockdev writev readv 30 x 1block ...passed 00:21:49.405 Test: blockdev writev readv block ...passed 00:21:49.405 Test: blockdev writev readv size > 128k ...passed 00:21:49.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:49.666 Test: blockdev comparev and writev ...[2024-07-23 06:17:42.751487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.751521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.751545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.751562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.751947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.751971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.751993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.752009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.752381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.752406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.752427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.752443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.752818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.752842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.752864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.666 [2024-07-23 06:17:42.752879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:49.666 passed 00:21:49.666 Test: blockdev nvme passthru rw ...passed 00:21:49.666 Test: blockdev nvme passthru vendor specific ...[2024-07-23 06:17:42.836155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.666 [2024-07-23 06:17:42.836234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.836485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.666 [2024-07-23 06:17:42.836509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.836726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.666 [2024-07-23 06:17:42.836750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:49.666 [2024-07-23 06:17:42.836958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.666 [2024-07-23 06:17:42.836982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:49.666 passed 00:21:49.666 Test: blockdev nvme admin passthru ...passed 00:21:49.666 Test: blockdev copy ...passed 00:21:49.666 00:21:49.666 Run Summary: Type Total Ran Passed Failed Inactive 00:21:49.666 suites 1 1 n/a 0 0 00:21:49.666 tests 23 23 23 0 0 00:21:49.666 asserts 152 152 152 0 n/a 00:21:49.666 00:21:49.666 Elapsed time = 1.450 seconds 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.926 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.926 rmmod nvme_tcp 00:21:49.926 rmmod nvme_fabrics 00:21:50.184 rmmod nvme_keyring 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1768475 ']' 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1768475 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1768475 ']' 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1768475 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1768475 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1768475' 00:21:50.184 killing process with pid 1768475 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1768475 00:21:50.184 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1768475 00:21:50.441 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.441 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.441 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.441 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.441 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.441 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.441 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.442 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:52.978 00:21:52.978 real 0m6.506s 00:21:52.978 user 0m12.100s 00:21:52.978 sys 0m2.394s 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.978 ************************************ 00:21:52.978 END TEST nvmf_bdevio_no_huge 00:21:52.978 ************************************ 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.978 ************************************ 00:21:52.978 START TEST nvmf_tls 00:21:52.978 ************************************ 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:52.978 * Looking for test storage... 00:21:52.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.978 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.979 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:54.883 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:54.883 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.883 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:54.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:54.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.884 
06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:21:54.884 00:21:54.884 --- 10.0.0.2 ping statistics --- 00:21:54.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.884 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:21:54.884 00:21:54.884 --- 10.0.0.1 ping statistics --- 00:21:54.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.884 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1770695 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1770695 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1770695 ']' 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.884 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.884 [2024-07-23 06:17:47.999389] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:21:54.884 [2024-07-23 06:17:47.999492] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.884 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.884 [2024-07-23 06:17:48.039233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:54.884 [2024-07-23 06:17:48.073895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.884 [2024-07-23 06:17:48.166717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.884 [2024-07-23 06:17:48.166773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.884 [2024-07-23 06:17:48.166789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.884 [2024-07-23 06:17:48.166803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.884 [2024-07-23 06:17:48.166814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.884 [2024-07-23 06:17:48.166843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.884 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.884 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:54.884 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.884 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:54.884 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.143 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.143 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:55.143 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:55.143 true 00:21:55.143 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:55.143 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:55.402 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:55.402 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:55.402 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:55.661 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:55.661 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:55.919 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:55.919 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 
00:21:55.919 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:56.177 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.177 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:56.435 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:56.435 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:56.435 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.435 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:56.694 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:56.694 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:56.694 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:56.952 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.952 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:57.210 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:57.210 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:57.210 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:57.467 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:57.467 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # 
format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:57.725 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.22v00DnHfE 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.QyKE863fQG 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.22v00DnHfE 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QyKE863fQG 00:21:57.985 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:58.244 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:58.504 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.22v00DnHfE 00:21:58.504 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.22v00DnHfE 00:21:58.504 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:58.764 [2024-07-23 06:17:51.971949] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.764 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:59.022 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:59.280 [2024-07-23 06:17:52.473267] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.280 [2024-07-23 06:17:52.473497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.280 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:59.538 malloc0 00:21:59.538 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:59.803 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.22v00DnHfE 00:22:00.067 [2024-07-23 06:17:53.285821] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:00.067 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.22v00DnHfE 00:22:00.067 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.289 Initializing NVMe Controllers 00:22:12.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:12.289 Initialization complete. Launching workers. 00:22:12.289 ======================================================== 00:22:12.289 Latency(us) 00:22:12.289 Device Information : IOPS MiB/s Average min max 00:22:12.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7795.09 30.45 8212.96 1386.84 9630.38 00:22:12.289 ======================================================== 00:22:12.289 Total : 7795.09 30.45 8212.96 1386.84 9630.38 00:22:12.289 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.22v00DnHfE 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.22v00DnHfE' 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1772690 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1772690 /var/tmp/bdevperf.sock 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1772690 ']' 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.289 06:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.289 [2024-07-23 06:18:03.458747] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:12.289 [2024-07-23 06:18:03.458831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772690 ] 00:22:12.289 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.289 [2024-07-23 06:18:03.490515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:12.289 [2024-07-23 06:18:03.516152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.289 [2024-07-23 06:18:03.599223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:12.289 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.22v00DnHfE 00:22:12.289 [2024-07-23 06:18:03.933182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.289 [2024-07-23 06:18:03.933305] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:12.289 TLSTESTn1 00:22:12.289 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:12.289 Running I/O for 10 seconds... 
00:22:22.269 00:22:22.269 Latency(us) 00:22:22.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.269 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:22.269 Verification LBA range: start 0x0 length 0x2000 00:22:22.269 TLSTESTn1 : 10.06 1910.04 7.46 0.00 0.00 66818.85 9806.13 100197.26 00:22:22.269 =================================================================================================================== 00:22:22.269 Total : 1910.04 7.46 0.00 0.00 66818.85 9806.13 100197.26 00:22:22.269 0 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1772690 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1772690 ']' 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1772690 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1772690 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1772690' 00:22:22.269 killing process with pid 1772690 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1772690 00:22:22.269 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.269 00:22:22.269 Latency(us) 00:22:22.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.269 =================================================================================================================== 00:22:22.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.269 [2024-07-23 06:18:14.258218] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1772690 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QyKE863fQG 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QyKE863fQG 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
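Both key files exercised in this stretch were produced earlier by format_interchange_psk: the run that just finished used /tmp/tmp.22v00DnHfE, and the failure case being set up next deliberately hands the initiator the other key, /tmp/tmp.QyKE863fQG. A rough sketch of what that helper likely does, assuming the hex string is treated as raw bytes, a little-endian CRC-32 is appended, and the result is Base64-encoded behind the NVMeTLSkey-1:<digest>: prefix (the embedded python mirrors the "python -" heredoc visible in the log, not its exact source):

# sketch: build an NVMe TLS PSK in interchange format and store it mode 0600
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key text used as-is (assumption)
digest = int(sys.argv[2])                    # 1 or 2 in this run
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC-32 trailer (assumption)
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
}

key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"

This matches the shape of the keys above (NVMeTLSkey-1:01:...: and NVMeTLSkey-1:02:...:), but the CRC and encoding details are an assumption, not a dump of nvmf/common.sh.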
00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QyKE863fQG 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QyKE863fQG' 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1774508 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1774508 /var/tmp/bdevperf.sock 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1774508 ']' 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.269 [2024-07-23 06:18:14.524170] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:22.269 [2024-07-23 06:18:14.524264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774508 ] 00:22:22.269 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.269 [2024-07-23 06:18:14.556611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:22.269 [2024-07-23 06:18:14.584278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.269 [2024-07-23 06:18:14.670169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:22.269 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QyKE863fQG 00:22:22.269 [2024-07-23 06:18:15.007934] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.269 [2024-07-23 06:18:15.008049] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:22.269 [2024-07-23 06:18:15.019808] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:22.269 [2024-07-23 06:18:15.020070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd28d0 (107): Transport endpoint is not connected 00:22:22.269 [2024-07-23 06:18:15.021060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd28d0 (9): Bad file descriptor 00:22:22.269 [2024-07-23 06:18:15.022058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:22.269 [2024-07-23 06:18:15.022078] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:22.269 [2024-07-23 06:18:15.022095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:22.269 request: 00:22:22.269 { 00:22:22.269 "name": "TLSTEST", 00:22:22.269 "trtype": "tcp", 00:22:22.269 "traddr": "10.0.0.2", 00:22:22.269 "adrfam": "ipv4", 00:22:22.269 "trsvcid": "4420", 00:22:22.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.269 "prchk_reftag": false, 00:22:22.269 "prchk_guard": false, 00:22:22.269 "hdgst": false, 00:22:22.269 "ddgst": false, 00:22:22.269 "psk": "/tmp/tmp.QyKE863fQG", 00:22:22.269 "method": "bdev_nvme_attach_controller", 00:22:22.269 "req_id": 1 00:22:22.269 } 00:22:22.269 Got JSON-RPC error response 00:22:22.269 response: 00:22:22.269 { 00:22:22.269 "code": -5, 00:22:22.269 "message": "Input/output error" 00:22:22.269 } 00:22:22.269 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1774508 00:22:22.269 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1774508 ']' 00:22:22.269 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1774508 00:22:22.269 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:22.269 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.269 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774508 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774508' 00:22:22.270 killing process with pid 1774508 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1774508 00:22:22.270 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.270 00:22:22.270 Latency(us) 00:22:22.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.270 =================================================================================================================== 00:22:22.270 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.270 [2024-07-23 06:18:15.071081] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1774508 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.22v00DnHfE 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.22v00DnHfE 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.22v00DnHfE 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.22v00DnHfE' 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1774529 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1774529 /var/tmp/bdevperf.sock 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1774529 ']' 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.270 [2024-07-23 06:18:15.332854] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:22.270 [2024-07-23 06:18:15.332948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774529 ] 00:22:22.270 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.270 [2024-07-23 06:18:15.366029] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
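Each of the failure cases here is driven through the NOT wrapper visible in the expansions above: run_bdevperf is expected to exit non-zero (target/tls.sh@37 returns 1 once the attach fails), and the wrapper turns that expected failure into a passing check. A minimal sketch of that inversion, assuming the simplest possible form rather than the full autotest_common.sh helper:

# sketch: succeed only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test case fails
    fi
    return 0        # expected failure -> test case passes
}

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.22v00DnHfE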
00:22:22.270 [2024-07-23 06:18:15.394718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.270 [2024-07-23 06:18:15.477671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:22.270 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.22v00DnHfE 00:22:22.529 [2024-07-23 06:18:15.811201] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.529 [2024-07-23 06:18:15.811318] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:22.529 [2024-07-23 06:18:15.821731] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:22.529 [2024-07-23 06:18:15.821771] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:22.529 [2024-07-23 06:18:15.821824] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:22.529 [2024-07-23 06:18:15.822154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12178d0 (107): Transport endpoint is not connected 00:22:22.529 [2024-07-23 06:18:15.823145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12178d0 (9): Bad file descriptor 00:22:22.529 [2024-07-23 06:18:15.824144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:22.529 [2024-07-23 06:18:15.824165] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:22.529 [2024-07-23 06:18:15.824192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:22.529 request: 00:22:22.529 { 00:22:22.529 "name": "TLSTEST", 00:22:22.529 "trtype": "tcp", 00:22:22.529 "traddr": "10.0.0.2", 00:22:22.529 "adrfam": "ipv4", 00:22:22.529 "trsvcid": "4420", 00:22:22.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.529 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:22.529 "prchk_reftag": false, 00:22:22.529 "prchk_guard": false, 00:22:22.529 "hdgst": false, 00:22:22.529 "ddgst": false, 00:22:22.529 "psk": "/tmp/tmp.22v00DnHfE", 00:22:22.529 "method": "bdev_nvme_attach_controller", 00:22:22.529 "req_id": 1 00:22:22.529 } 00:22:22.529 Got JSON-RPC error response 00:22:22.529 response: 00:22:22.529 { 00:22:22.529 "code": -5, 00:22:22.529 "message": "Input/output error" 00:22:22.529 } 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1774529 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1774529 ']' 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1774529 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774529 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774529' 00:22:22.529 killing process with pid 1774529 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1774529 00:22:22.529 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.529 00:22:22.529 Latency(us) 00:22:22.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.529 =================================================================================================================== 00:22:22.529 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.529 [2024-07-23 06:18:15.870867] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:22.529 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1774529 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.22v00DnHfE 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.22v00DnHfE 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.22v00DnHfE 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.22v00DnHfE' 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1774671 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1774671 /var/tmp/bdevperf.sock 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1774671 ']' 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.788 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.788 [2024-07-23 06:18:16.105016] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:22.788 [2024-07-23 06:18:16.105110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774671 ] 00:22:23.047 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.047 [2024-07-23 06:18:16.138071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:23.047 [2024-07-23 06:18:16.165071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.047 [2024-07-23 06:18:16.250285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.047 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.047 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:23.047 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.22v00DnHfE 00:22:23.306 [2024-07-23 06:18:16.586516] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.306 [2024-07-23 06:18:16.586684] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:23.306 [2024-07-23 06:18:16.593766] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:23.306 [2024-07-23 06:18:16.593800] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:23.306 [2024-07-23 06:18:16.593840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:23.306 [2024-07-23 06:18:16.594700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ac8d0 (107): Transport endpoint is not connected 00:22:23.306 [2024-07-23 06:18:16.595661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ac8d0 (9): Bad file descriptor 00:22:23.306 [2024-07-23 06:18:16.596686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:23.306 [2024-07-23 06:18:16.596708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:23.306 [2024-07-23 06:18:16.596725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:23.306 request: 00:22:23.306 { 00:22:23.306 "name": "TLSTEST", 00:22:23.306 "trtype": "tcp", 00:22:23.306 "traddr": "10.0.0.2", 00:22:23.306 "adrfam": "ipv4", 00:22:23.306 "trsvcid": "4420", 00:22:23.306 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:23.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.306 "prchk_reftag": false, 00:22:23.306 "prchk_guard": false, 00:22:23.306 "hdgst": false, 00:22:23.306 "ddgst": false, 00:22:23.306 "psk": "/tmp/tmp.22v00DnHfE", 00:22:23.306 "method": "bdev_nvme_attach_controller", 00:22:23.306 "req_id": 1 00:22:23.306 } 00:22:23.306 Got JSON-RPC error response 00:22:23.306 response: 00:22:23.306 { 00:22:23.306 "code": -5, 00:22:23.306 "message": "Input/output error" 00:22:23.306 } 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1774671 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1774671 ']' 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1774671 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774671 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774671' 00:22:23.306 killing process with pid 1774671 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1774671 00:22:23.306 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.306 00:22:23.306 Latency(us) 00:22:23.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.306 =================================================================================================================== 00:22:23.306 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:23.306 [2024-07-23 06:18:16.646932] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:23.306 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1774671 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1774804 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1774804 /var/tmp/bdevperf.sock 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1774804 ']' 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.564 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.565 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.565 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.565 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.565 [2024-07-23 06:18:16.903923] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:23.565 [2024-07-23 06:18:16.904018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774804 ] 00:22:23.823 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.823 [2024-07-23 06:18:16.937641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:23.823 [2024-07-23 06:18:16.965265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.823 [2024-07-23 06:18:17.054437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.823 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.823 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:23.823 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:24.389 [2024-07-23 06:18:17.446043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:24.389 [2024-07-23 06:18:17.447625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc75de0 (9): Bad file descriptor 00:22:24.389 [2024-07-23 06:18:17.448620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:24.389 [2024-07-23 06:18:17.448656] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:24.389 [2024-07-23 06:18:17.448673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.389 request: 00:22:24.389 { 00:22:24.389 "name": "TLSTEST", 00:22:24.389 "trtype": "tcp", 00:22:24.389 "traddr": "10.0.0.2", 00:22:24.389 "adrfam": "ipv4", 00:22:24.389 "trsvcid": "4420", 00:22:24.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.389 "prchk_reftag": false, 00:22:24.389 "prchk_guard": false, 00:22:24.389 "hdgst": false, 00:22:24.389 "ddgst": false, 00:22:24.389 "method": "bdev_nvme_attach_controller", 00:22:24.389 "req_id": 1 00:22:24.389 } 00:22:24.389 Got JSON-RPC error response 00:22:24.389 response: 00:22:24.389 { 00:22:24.389 "code": -5, 00:22:24.389 "message": "Input/output error" 00:22:24.389 } 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1774804 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1774804 ']' 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1774804 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774804 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774804' 00:22:24.389 killing process with pid 1774804 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1774804 00:22:24.389 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.389 00:22:24.389 Latency(us) 00:22:24.389 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.389 =================================================================================================================== 00:22:24.389 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1774804 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1770695 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1770695 ']' 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1770695 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.389 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1770695 00:22:24.647 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:24.647 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:24.647 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1770695' 00:22:24.647 killing process with pid 1770695 00:22:24.647 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1770695 00:22:24.647 [2024-07-23 06:18:17.735371] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:24.647 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1770695 00:22:24.905 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:24.905 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:24.905 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.905 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.905 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:24.905 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:24.905 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.905 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:24.905 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:24.905 06:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.xOlCtzUide 00:22:24.905 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:24.905 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.xOlCtzUide 00:22:24.905 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:24.905 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.905 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1774953 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1774953 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1774953 ']' 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.906 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.906 [2024-07-23 06:18:18.090149] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:24.906 [2024-07-23 06:18:18.090247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.906 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.906 [2024-07-23 06:18:18.127409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:24.906 [2024-07-23 06:18:18.159581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.906 [2024-07-23 06:18:18.249898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.906 [2024-07-23 06:18:18.249969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.906 [2024-07-23 06:18:18.249986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.163 [2024-07-23 06:18:18.250000] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.163 [2024-07-23 06:18:18.250011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
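The freshly restarted target is configured just below the same way the first one was (setup_nvmf_tgt in target/tls.sh), this time with the long key /tmp/tmp.xOlCtzUide. Collected into one sketch, with the rpc.py path and NQNs used in this run, the target-side sequence is:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.xOlCtzUide

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The -k flag on the listener is what requests TLS on 10.0.0.2:4420 (hence the "TLS support is considered experimental" notice), and add_host --psk is the call that registers host1's PSK and triggers the nvmf_tcp_psk_path deprecation warning seen in both runs.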
00:22:25.163 [2024-07-23 06:18:18.250052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.xOlCtzUide 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xOlCtzUide 00:22:25.163 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.421 [2024-07-23 06:18:18.672871] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.421 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.678 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:25.937 [2024-07-23 06:18:19.262492] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.937 [2024-07-23 06:18:19.262750] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.194 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.452 malloc0 00:22:26.452 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.710 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide 00:22:26.968 [2024-07-23 06:18:20.087906] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOlCtzUide 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xOlCtzUide' 00:22:26.968 06:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1775235 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1775235 /var/tmp/bdevperf.sock 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1775235 ']' 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.968 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.968 [2024-07-23 06:18:20.150510] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:26.968 [2024-07-23 06:18:20.150620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775235 ] 00:22:26.968 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.968 [2024-07-23 06:18:20.184519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
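Before bdevperf attaches, setup_nvmf_tgt (target/tls.sh@49-58, traced earlier) has already configured the target side for TLS: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener created with -k so it negotiates TLS, a malloc namespace, and a host entry bound to the PSK file. A condensed sketch of that RPC sequence, with rpc.py standing in for the full scripts/rpc.py path used in the log:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide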
00:22:26.968 [2024-07-23 06:18:20.212125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.968 [2024-07-23 06:18:20.299427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.282 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.282 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:27.282 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide 00:22:27.542 [2024-07-23 06:18:20.629354] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.542 [2024-07-23 06:18:20.629494] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.542 TLSTESTn1 00:22:27.542 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:27.542 Running I/O for 10 seconds... 00:22:39.761 00:22:39.762 Latency(us) 00:22:39.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.762 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:39.762 Verification LBA range: start 0x0 length 0x2000 00:22:39.762 TLSTESTn1 : 10.06 2033.03 7.94 0.00 0.00 62778.03 11650.84 99420.54 00:22:39.762 =================================================================================================================== 00:22:39.762 Total : 2033.03 7.94 0.00 0.00 62778.03 11650.84 99420.54 00:22:39.762 0 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1775235 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1775235 ']' 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1775235 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775235 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775235' 00:22:39.762 killing process with pid 1775235 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1775235 00:22:39.762 Received shutdown signal, test time was about 10.000000 seconds 00:22:39.762 00:22:39.762 Latency(us) 00:22:39.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.762 
=================================================================================================================== 00:22:39.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.762 [2024-07-23 06:18:30.957353] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:39.762 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1775235 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.xOlCtzUide 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOlCtzUide 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOlCtzUide 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOlCtzUide 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xOlCtzUide' 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1776436 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1776436 /var/tmp/bdevperf.sock 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1776436 ']' 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
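The successful run above (run_bdevperf, target/tls.sh@22-45) attaches a TLS-protected controller from the bdevperf side and drives verify I/O for ten seconds; the TLSTESTn1 table reports roughly 2033 IOPS over the encrypted connection. A condensed sketch of that flow, with bdevperf, rpc.py and bdevperf.py standing in for the full SPDK paths and the harness's waitforlisten replaced by a plain background start:

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With the key now chmod'ed to 0666 (target/tls.sh@170), the same attach is rerun under the NOT wrapper and is expected to be rejected before any I/O runs.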
00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.762 [2024-07-23 06:18:31.233941] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:39.762 [2024-07-23 06:18:31.234035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776436 ] 00:22:39.762 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.762 [2024-07-23 06:18:31.265550] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:39.762 [2024-07-23 06:18:31.292585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.762 [2024-07-23 06:18:31.378055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide 00:22:39.762 [2024-07-23 06:18:31.763101] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.762 [2024-07-23 06:18:31.763185] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:39.762 [2024-07-23 06:18:31.763197] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.xOlCtzUide 00:22:39.762 request: 00:22:39.762 { 00:22:39.762 "name": "TLSTEST", 00:22:39.762 "trtype": "tcp", 00:22:39.762 "traddr": "10.0.0.2", 00:22:39.762 "adrfam": "ipv4", 00:22:39.762 "trsvcid": "4420", 00:22:39.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.762 "prchk_reftag": false, 00:22:39.762 "prchk_guard": false, 00:22:39.762 "hdgst": false, 00:22:39.762 "ddgst": false, 00:22:39.762 "psk": "/tmp/tmp.xOlCtzUide", 00:22:39.762 "method": "bdev_nvme_attach_controller", 00:22:39.762 "req_id": 1 00:22:39.762 } 00:22:39.762 Got JSON-RPC error response 00:22:39.762 response: 00:22:39.762 { 00:22:39.762 "code": -1, 00:22:39.762 "message": "Operation not permitted" 00:22:39.762 } 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1776436 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1776436 ']' 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1776436 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776436 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
process_name=reactor_2 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776436' 00:22:39.762 killing process with pid 1776436 00:22:39.762 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1776436 00:22:39.762 Received shutdown signal, test time was about 10.000000 seconds 00:22:39.762 00:22:39.762 Latency(us) 00:22:39.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.763 =================================================================================================================== 00:22:39.763 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.763 06:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1776436 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1774953 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1774953 ']' 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1774953 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774953 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774953' 00:22:39.763 killing process with pid 1774953 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1774953 00:22:39.763 [2024-07-23 06:18:32.047706] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1774953 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1776577 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1776577 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1776577 ']' 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.763 [2024-07-23 06:18:32.365310] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:39.763 [2024-07-23 06:18:32.365402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.763 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.763 [2024-07-23 06:18:32.408666] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:39.763 [2024-07-23 06:18:32.435005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.763 [2024-07-23 06:18:32.520062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.763 [2024-07-23 06:18:32.520119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.763 [2024-07-23 06:18:32.520145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.763 [2024-07-23 06:18:32.520156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.763 [2024-07-23 06:18:32.520181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
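target/tls.sh@177 then repeats setup_nvmf_tgt against this fresh target while the key file is still world-readable, again under the NOT wrapper: the transport, subsystem, listener and namespace RPCs succeed, but nvmf_subsystem_add_host must refuse to load the PSK. A hedged sketch of that negative check, with an if/else standing in for the NOT helper:

  chmod 0666 /tmp/tmp.xOlCtzUide           # already done at target/tls.sh@170
  if rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide; then
      echo "unexpected success"; exit 1
  else
      echo "rejected as expected"          # JSON-RPC -32603 / Internal error here, vs -1 / Operation not permitted on the initiator side
  fi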
00:22:39.763 [2024-07-23 06:18:32.520211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.xOlCtzUide 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.xOlCtzUide 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.xOlCtzUide 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xOlCtzUide 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:39.763 [2024-07-23 06:18:32.870003] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.763 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:40.023 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:40.023 [2024-07-23 06:18:33.355325] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.023 [2024-07-23 06:18:33.355600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.282 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:40.282 malloc0 00:22:40.541 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:40.541 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide 00:22:40.800 [2024-07-23 06:18:34.100854] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:40.800 [2024-07-23 06:18:34.100911] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:40.800 [2024-07-23 06:18:34.100948] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:40.800 request: 00:22:40.800 { 00:22:40.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.800 "host": "nqn.2016-06.io.spdk:host1", 00:22:40.800 "psk": "/tmp/tmp.xOlCtzUide", 00:22:40.800 "method": "nvmf_subsystem_add_host", 00:22:40.800 "req_id": 1 00:22:40.800 } 00:22:40.800 Got JSON-RPC error response 00:22:40.800 response: 00:22:40.800 { 00:22:40.800 "code": -32603, 00:22:40.800 "message": "Internal error" 00:22:40.800 } 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1776577 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1776577 ']' 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1776577 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.800 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776577 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776577' 00:22:41.058 killing process with pid 1776577 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1776577 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1776577 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.xOlCtzUide 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1776874 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1776874 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1776874 ']' 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.058 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.317 [2024-07-23 06:18:34.419305] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:41.317 [2024-07-23 06:18:34.419381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.317 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.317 [2024-07-23 06:18:34.458339] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:41.317 [2024-07-23 06:18:34.488590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.317 [2024-07-23 06:18:34.578241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.317 [2024-07-23 06:18:34.578304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.317 [2024-07-23 06:18:34.578330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.317 [2024-07-23 06:18:34.578345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.317 [2024-07-23 06:18:34.578357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
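After the key's 0600 permissions are restored (target/tls.sh@181), a TLS-capable target is rebuilt and a bdevperf controller attached again, and target/tls.sh@196-203 captures both the target and bdevperf configurations with save_config, then restarts nvmf_tgt from the saved JSON (fed through -c /dev/fd/62) so the --psk host entry comes back from config rather than from fresh RPCs. A sketch of the same flow using plain files and omitting the ip netns wrapper seen in the log:

  rpc.py save_config > tgtconf.json                     # contains the nvmf_subsystem_add_host entry with "psk": "/tmp/tmp.xOlCtzUide"
  rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperfconf.json
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgtconf.json        # the test passes the JSON via /dev/fd/62 instead of a file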
00:22:41.317 [2024-07-23 06:18:34.578392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.xOlCtzUide 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xOlCtzUide 00:22:41.581 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:41.841 [2024-07-23 06:18:34.986281] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.841 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:42.100 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:42.360 [2024-07-23 06:18:35.519749] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.360 [2024-07-23 06:18:35.520011] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.360 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:42.619 malloc0 00:22:42.619 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:42.877 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide 00:22:43.136 [2024-07-23 06:18:36.345739] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1777152 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1777152 /var/tmp/bdevperf.sock 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 1777152 ']' 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.136 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.136 [2024-07-23 06:18:36.404576] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:43.136 [2024-07-23 06:18:36.404671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777152 ] 00:22:43.136 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.136 [2024-07-23 06:18:36.441065] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:43.136 [2024-07-23 06:18:36.466494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.393 [2024-07-23 06:18:36.551750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.393 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.393 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:43.393 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide 00:22:43.651 [2024-07-23 06:18:36.882383] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.651 [2024-07-23 06:18:36.882494] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:43.652 TLSTESTn1 00:22:43.652 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:44.221 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:44.221 "subsystems": [ 00:22:44.221 { 00:22:44.221 "subsystem": "keyring", 00:22:44.222 "config": [] 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "subsystem": "iobuf", 00:22:44.222 "config": [ 00:22:44.222 { 00:22:44.222 "method": "iobuf_set_options", 00:22:44.222 "params": { 00:22:44.222 "small_pool_count": 8192, 00:22:44.222 "large_pool_count": 1024, 00:22:44.222 "small_bufsize": 8192, 00:22:44.222 "large_bufsize": 135168 00:22:44.222 } 00:22:44.222 } 00:22:44.222 ] 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "subsystem": "sock", 00:22:44.222 "config": [ 00:22:44.222 { 00:22:44.222 "method": "sock_set_default_impl", 00:22:44.222 "params": { 00:22:44.222 "impl_name": "posix" 00:22:44.222 } 00:22:44.222 }, 
00:22:44.222 { 00:22:44.222 "method": "sock_impl_set_options", 00:22:44.222 "params": { 00:22:44.222 "impl_name": "ssl", 00:22:44.222 "recv_buf_size": 4096, 00:22:44.222 "send_buf_size": 4096, 00:22:44.222 "enable_recv_pipe": true, 00:22:44.222 "enable_quickack": false, 00:22:44.222 "enable_placement_id": 0, 00:22:44.222 "enable_zerocopy_send_server": true, 00:22:44.222 "enable_zerocopy_send_client": false, 00:22:44.222 "zerocopy_threshold": 0, 00:22:44.222 "tls_version": 0, 00:22:44.222 "enable_ktls": false 00:22:44.222 } 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "method": "sock_impl_set_options", 00:22:44.222 "params": { 00:22:44.222 "impl_name": "posix", 00:22:44.222 "recv_buf_size": 2097152, 00:22:44.222 "send_buf_size": 2097152, 00:22:44.222 "enable_recv_pipe": true, 00:22:44.222 "enable_quickack": false, 00:22:44.222 "enable_placement_id": 0, 00:22:44.222 "enable_zerocopy_send_server": true, 00:22:44.222 "enable_zerocopy_send_client": false, 00:22:44.222 "zerocopy_threshold": 0, 00:22:44.222 "tls_version": 0, 00:22:44.222 "enable_ktls": false 00:22:44.222 } 00:22:44.222 } 00:22:44.222 ] 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "subsystem": "vmd", 00:22:44.222 "config": [] 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "subsystem": "accel", 00:22:44.222 "config": [ 00:22:44.222 { 00:22:44.222 "method": "accel_set_options", 00:22:44.222 "params": { 00:22:44.222 "small_cache_size": 128, 00:22:44.222 "large_cache_size": 16, 00:22:44.222 "task_count": 2048, 00:22:44.222 "sequence_count": 2048, 00:22:44.222 "buf_count": 2048 00:22:44.222 } 00:22:44.222 } 00:22:44.222 ] 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "subsystem": "bdev", 00:22:44.222 "config": [ 00:22:44.222 { 00:22:44.222 "method": "bdev_set_options", 00:22:44.222 "params": { 00:22:44.222 "bdev_io_pool_size": 65535, 00:22:44.222 "bdev_io_cache_size": 256, 00:22:44.222 "bdev_auto_examine": true, 00:22:44.222 "iobuf_small_cache_size": 128, 00:22:44.222 "iobuf_large_cache_size": 16 00:22:44.222 } 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "method": "bdev_raid_set_options", 00:22:44.222 "params": { 00:22:44.222 "process_window_size_kb": 1024, 00:22:44.222 "process_max_bandwidth_mb_sec": 0 00:22:44.222 } 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "method": "bdev_iscsi_set_options", 00:22:44.222 "params": { 00:22:44.222 "timeout_sec": 30 00:22:44.222 } 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "method": "bdev_nvme_set_options", 00:22:44.222 "params": { 00:22:44.222 "action_on_timeout": "none", 00:22:44.222 "timeout_us": 0, 00:22:44.222 "timeout_admin_us": 0, 00:22:44.222 "keep_alive_timeout_ms": 10000, 00:22:44.222 "arbitration_burst": 0, 00:22:44.222 "low_priority_weight": 0, 00:22:44.222 "medium_priority_weight": 0, 00:22:44.222 "high_priority_weight": 0, 00:22:44.222 "nvme_adminq_poll_period_us": 10000, 00:22:44.222 "nvme_ioq_poll_period_us": 0, 00:22:44.222 "io_queue_requests": 0, 00:22:44.222 "delay_cmd_submit": true, 00:22:44.222 "transport_retry_count": 4, 00:22:44.222 "bdev_retry_count": 3, 00:22:44.222 "transport_ack_timeout": 0, 00:22:44.222 "ctrlr_loss_timeout_sec": 0, 00:22:44.222 "reconnect_delay_sec": 0, 00:22:44.222 "fast_io_fail_timeout_sec": 0, 00:22:44.222 "disable_auto_failback": false, 00:22:44.222 "generate_uuids": false, 00:22:44.222 "transport_tos": 0, 00:22:44.222 "nvme_error_stat": false, 00:22:44.222 "rdma_srq_size": 0, 00:22:44.222 "io_path_stat": false, 00:22:44.222 "allow_accel_sequence": false, 00:22:44.222 "rdma_max_cq_size": 0, 00:22:44.222 "rdma_cm_event_timeout_ms": 0, 00:22:44.222 
"dhchap_digests": [ 00:22:44.222 "sha256", 00:22:44.222 "sha384", 00:22:44.222 "sha512" 00:22:44.222 ], 00:22:44.222 "dhchap_dhgroups": [ 00:22:44.222 "null", 00:22:44.222 "ffdhe2048", 00:22:44.222 "ffdhe3072", 00:22:44.222 "ffdhe4096", 00:22:44.222 "ffdhe6144", 00:22:44.222 "ffdhe8192" 00:22:44.222 ] 00:22:44.222 } 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "method": "bdev_nvme_set_hotplug", 00:22:44.222 "params": { 00:22:44.222 "period_us": 100000, 00:22:44.222 "enable": false 00:22:44.222 } 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "method": "bdev_malloc_create", 00:22:44.222 "params": { 00:22:44.222 "name": "malloc0", 00:22:44.222 "num_blocks": 8192, 00:22:44.222 "block_size": 4096, 00:22:44.222 "physical_block_size": 4096, 00:22:44.222 "uuid": "539de479-68ed-4193-a66a-c746905a6df2", 00:22:44.222 "optimal_io_boundary": 0, 00:22:44.222 "md_size": 0, 00:22:44.222 "dif_type": 0, 00:22:44.222 "dif_is_head_of_md": false, 00:22:44.222 "dif_pi_format": 0 00:22:44.222 } 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "method": "bdev_wait_for_examine" 00:22:44.222 } 00:22:44.222 ] 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "subsystem": "nbd", 00:22:44.222 "config": [] 00:22:44.222 }, 00:22:44.222 { 00:22:44.223 "subsystem": "scheduler", 00:22:44.223 "config": [ 00:22:44.223 { 00:22:44.223 "method": "framework_set_scheduler", 00:22:44.223 "params": { 00:22:44.223 "name": "static" 00:22:44.223 } 00:22:44.223 } 00:22:44.223 ] 00:22:44.223 }, 00:22:44.223 { 00:22:44.223 "subsystem": "nvmf", 00:22:44.223 "config": [ 00:22:44.223 { 00:22:44.223 "method": "nvmf_set_config", 00:22:44.223 "params": { 00:22:44.223 "discovery_filter": "match_any", 00:22:44.223 "admin_cmd_passthru": { 00:22:44.223 "identify_ctrlr": false 00:22:44.223 } 00:22:44.223 } 00:22:44.223 }, 00:22:44.223 { 00:22:44.223 "method": "nvmf_set_max_subsystems", 00:22:44.223 "params": { 00:22:44.223 "max_subsystems": 1024 00:22:44.223 } 00:22:44.223 }, 00:22:44.223 { 00:22:44.223 "method": "nvmf_set_crdt", 00:22:44.223 "params": { 00:22:44.223 "crdt1": 0, 00:22:44.223 "crdt2": 0, 00:22:44.223 "crdt3": 0 00:22:44.223 } 00:22:44.223 }, 00:22:44.223 { 00:22:44.223 "method": "nvmf_create_transport", 00:22:44.223 "params": { 00:22:44.223 "trtype": "TCP", 00:22:44.223 "max_queue_depth": 128, 00:22:44.223 "max_io_qpairs_per_ctrlr": 127, 00:22:44.223 "in_capsule_data_size": 4096, 00:22:44.223 "max_io_size": 131072, 00:22:44.223 "io_unit_size": 131072, 00:22:44.223 "max_aq_depth": 128, 00:22:44.223 "num_shared_buffers": 511, 00:22:44.223 "buf_cache_size": 4294967295, 00:22:44.223 "dif_insert_or_strip": false, 00:22:44.223 "zcopy": false, 00:22:44.223 "c2h_success": false, 00:22:44.223 "sock_priority": 0, 00:22:44.223 "abort_timeout_sec": 1, 00:22:44.223 "ack_timeout": 0, 00:22:44.223 "data_wr_pool_size": 0 00:22:44.223 } 00:22:44.223 }, 00:22:44.223 { 00:22:44.223 "method": "nvmf_create_subsystem", 00:22:44.223 "params": { 00:22:44.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.223 "allow_any_host": false, 00:22:44.223 "serial_number": "SPDK00000000000001", 00:22:44.223 "model_number": "SPDK bdev Controller", 00:22:44.223 "max_namespaces": 10, 00:22:44.223 "min_cntlid": 1, 00:22:44.223 "max_cntlid": 65519, 00:22:44.223 "ana_reporting": false 00:22:44.223 } 00:22:44.223 }, 00:22:44.223 { 00:22:44.223 "method": "nvmf_subsystem_add_host", 00:22:44.223 "params": { 00:22:44.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.223 "host": "nqn.2016-06.io.spdk:host1", 00:22:44.223 "psk": "/tmp/tmp.xOlCtzUide" 00:22:44.223 } 00:22:44.223 }, 00:22:44.223 { 
00:22:44.223 "method": "nvmf_subsystem_add_ns", 00:22:44.223 "params": { 00:22:44.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.223 "namespace": { 00:22:44.223 "nsid": 1, 00:22:44.223 "bdev_name": "malloc0", 00:22:44.223 "nguid": "539DE47968ED4193A66AC746905A6DF2", 00:22:44.223 "uuid": "539de479-68ed-4193-a66a-c746905a6df2", 00:22:44.223 "no_auto_visible": false 00:22:44.223 } 00:22:44.223 } 00:22:44.223 }, 00:22:44.223 { 00:22:44.223 "method": "nvmf_subsystem_add_listener", 00:22:44.223 "params": { 00:22:44.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.223 "listen_address": { 00:22:44.223 "trtype": "TCP", 00:22:44.223 "adrfam": "IPv4", 00:22:44.223 "traddr": "10.0.0.2", 00:22:44.223 "trsvcid": "4420" 00:22:44.223 }, 00:22:44.223 "secure_channel": true 00:22:44.223 } 00:22:44.223 } 00:22:44.223 ] 00:22:44.223 } 00:22:44.223 ] 00:22:44.223 }' 00:22:44.223 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:44.483 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:44.483 "subsystems": [ 00:22:44.483 { 00:22:44.483 "subsystem": "keyring", 00:22:44.483 "config": [] 00:22:44.483 }, 00:22:44.483 { 00:22:44.483 "subsystem": "iobuf", 00:22:44.483 "config": [ 00:22:44.483 { 00:22:44.483 "method": "iobuf_set_options", 00:22:44.483 "params": { 00:22:44.483 "small_pool_count": 8192, 00:22:44.483 "large_pool_count": 1024, 00:22:44.483 "small_bufsize": 8192, 00:22:44.483 "large_bufsize": 135168 00:22:44.483 } 00:22:44.483 } 00:22:44.483 ] 00:22:44.483 }, 00:22:44.483 { 00:22:44.483 "subsystem": "sock", 00:22:44.483 "config": [ 00:22:44.483 { 00:22:44.483 "method": "sock_set_default_impl", 00:22:44.483 "params": { 00:22:44.483 "impl_name": "posix" 00:22:44.483 } 00:22:44.483 }, 00:22:44.483 { 00:22:44.483 "method": "sock_impl_set_options", 00:22:44.483 "params": { 00:22:44.483 "impl_name": "ssl", 00:22:44.483 "recv_buf_size": 4096, 00:22:44.483 "send_buf_size": 4096, 00:22:44.483 "enable_recv_pipe": true, 00:22:44.483 "enable_quickack": false, 00:22:44.483 "enable_placement_id": 0, 00:22:44.483 "enable_zerocopy_send_server": true, 00:22:44.483 "enable_zerocopy_send_client": false, 00:22:44.483 "zerocopy_threshold": 0, 00:22:44.483 "tls_version": 0, 00:22:44.483 "enable_ktls": false 00:22:44.483 } 00:22:44.483 }, 00:22:44.483 { 00:22:44.483 "method": "sock_impl_set_options", 00:22:44.483 "params": { 00:22:44.483 "impl_name": "posix", 00:22:44.483 "recv_buf_size": 2097152, 00:22:44.483 "send_buf_size": 2097152, 00:22:44.483 "enable_recv_pipe": true, 00:22:44.483 "enable_quickack": false, 00:22:44.483 "enable_placement_id": 0, 00:22:44.483 "enable_zerocopy_send_server": true, 00:22:44.483 "enable_zerocopy_send_client": false, 00:22:44.483 "zerocopy_threshold": 0, 00:22:44.483 "tls_version": 0, 00:22:44.483 "enable_ktls": false 00:22:44.483 } 00:22:44.483 } 00:22:44.483 ] 00:22:44.483 }, 00:22:44.483 { 00:22:44.483 "subsystem": "vmd", 00:22:44.483 "config": [] 00:22:44.483 }, 00:22:44.483 { 00:22:44.483 "subsystem": "accel", 00:22:44.483 "config": [ 00:22:44.483 { 00:22:44.483 "method": "accel_set_options", 00:22:44.483 "params": { 00:22:44.483 "small_cache_size": 128, 00:22:44.483 "large_cache_size": 16, 00:22:44.483 "task_count": 2048, 00:22:44.483 "sequence_count": 2048, 00:22:44.483 "buf_count": 2048 00:22:44.483 } 00:22:44.483 } 00:22:44.483 ] 00:22:44.483 }, 00:22:44.483 { 00:22:44.483 "subsystem": "bdev", 00:22:44.483 
"config": [ 00:22:44.483 { 00:22:44.483 "method": "bdev_set_options", 00:22:44.483 "params": { 00:22:44.483 "bdev_io_pool_size": 65535, 00:22:44.483 "bdev_io_cache_size": 256, 00:22:44.483 "bdev_auto_examine": true, 00:22:44.483 "iobuf_small_cache_size": 128, 00:22:44.484 "iobuf_large_cache_size": 16 00:22:44.484 } 00:22:44.484 }, 00:22:44.484 { 00:22:44.484 "method": "bdev_raid_set_options", 00:22:44.484 "params": { 00:22:44.484 "process_window_size_kb": 1024, 00:22:44.484 "process_max_bandwidth_mb_sec": 0 00:22:44.484 } 00:22:44.484 }, 00:22:44.484 { 00:22:44.484 "method": "bdev_iscsi_set_options", 00:22:44.484 "params": { 00:22:44.484 "timeout_sec": 30 00:22:44.484 } 00:22:44.484 }, 00:22:44.484 { 00:22:44.484 "method": "bdev_nvme_set_options", 00:22:44.484 "params": { 00:22:44.484 "action_on_timeout": "none", 00:22:44.484 "timeout_us": 0, 00:22:44.484 "timeout_admin_us": 0, 00:22:44.484 "keep_alive_timeout_ms": 10000, 00:22:44.484 "arbitration_burst": 0, 00:22:44.484 "low_priority_weight": 0, 00:22:44.484 "medium_priority_weight": 0, 00:22:44.484 "high_priority_weight": 0, 00:22:44.484 "nvme_adminq_poll_period_us": 10000, 00:22:44.484 "nvme_ioq_poll_period_us": 0, 00:22:44.484 "io_queue_requests": 512, 00:22:44.484 "delay_cmd_submit": true, 00:22:44.484 "transport_retry_count": 4, 00:22:44.484 "bdev_retry_count": 3, 00:22:44.484 "transport_ack_timeout": 0, 00:22:44.484 "ctrlr_loss_timeout_sec": 0, 00:22:44.484 "reconnect_delay_sec": 0, 00:22:44.484 "fast_io_fail_timeout_sec": 0, 00:22:44.484 "disable_auto_failback": false, 00:22:44.484 "generate_uuids": false, 00:22:44.484 "transport_tos": 0, 00:22:44.484 "nvme_error_stat": false, 00:22:44.484 "rdma_srq_size": 0, 00:22:44.484 "io_path_stat": false, 00:22:44.484 "allow_accel_sequence": false, 00:22:44.484 "rdma_max_cq_size": 0, 00:22:44.484 "rdma_cm_event_timeout_ms": 0, 00:22:44.484 "dhchap_digests": [ 00:22:44.484 "sha256", 00:22:44.484 "sha384", 00:22:44.484 "sha512" 00:22:44.484 ], 00:22:44.484 "dhchap_dhgroups": [ 00:22:44.484 "null", 00:22:44.484 "ffdhe2048", 00:22:44.484 "ffdhe3072", 00:22:44.484 "ffdhe4096", 00:22:44.484 "ffdhe6144", 00:22:44.484 "ffdhe8192" 00:22:44.484 ] 00:22:44.484 } 00:22:44.484 }, 00:22:44.484 { 00:22:44.484 "method": "bdev_nvme_attach_controller", 00:22:44.484 "params": { 00:22:44.484 "name": "TLSTEST", 00:22:44.484 "trtype": "TCP", 00:22:44.484 "adrfam": "IPv4", 00:22:44.484 "traddr": "10.0.0.2", 00:22:44.484 "trsvcid": "4420", 00:22:44.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.484 "prchk_reftag": false, 00:22:44.484 "prchk_guard": false, 00:22:44.484 "ctrlr_loss_timeout_sec": 0, 00:22:44.484 "reconnect_delay_sec": 0, 00:22:44.484 "fast_io_fail_timeout_sec": 0, 00:22:44.484 "psk": "/tmp/tmp.xOlCtzUide", 00:22:44.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.484 "hdgst": false, 00:22:44.484 "ddgst": false 00:22:44.484 } 00:22:44.484 }, 00:22:44.484 { 00:22:44.484 "method": "bdev_nvme_set_hotplug", 00:22:44.484 "params": { 00:22:44.484 "period_us": 100000, 00:22:44.484 "enable": false 00:22:44.484 } 00:22:44.484 }, 00:22:44.484 { 00:22:44.484 "method": "bdev_wait_for_examine" 00:22:44.484 } 00:22:44.484 ] 00:22:44.484 }, 00:22:44.484 { 00:22:44.484 "subsystem": "nbd", 00:22:44.484 "config": [] 00:22:44.484 } 00:22:44.484 ] 00:22:44.484 }' 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1777152 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1777152 ']' 00:22:44.484 06:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1777152 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1777152 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1777152' 00:22:44.484 killing process with pid 1777152 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1777152 00:22:44.484 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.484 00:22:44.484 Latency(us) 00:22:44.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.484 =================================================================================================================== 00:22:44.484 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:44.484 [2024-07-23 06:18:37.681680] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:44.484 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1777152 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1776874 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1776874 ']' 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1776874 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776874 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776874' 00:22:44.744 killing process with pid 1776874 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1776874 00:22:44.744 [2024-07-23 06:18:37.930766] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:44.744 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1776874 00:22:45.003 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:45.003 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.003 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.003 06:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:45.003 "subsystems": [ 00:22:45.003 { 00:22:45.003 "subsystem": "keyring", 00:22:45.003 "config": [] 00:22:45.003 }, 00:22:45.003 { 00:22:45.003 "subsystem": "iobuf", 00:22:45.003 "config": [ 00:22:45.003 { 00:22:45.003 "method": "iobuf_set_options", 00:22:45.003 "params": { 00:22:45.003 "small_pool_count": 8192, 00:22:45.003 "large_pool_count": 1024, 00:22:45.003 "small_bufsize": 8192, 00:22:45.003 "large_bufsize": 135168 00:22:45.003 } 00:22:45.003 } 00:22:45.003 ] 00:22:45.003 }, 00:22:45.003 { 00:22:45.003 "subsystem": "sock", 00:22:45.003 "config": [ 00:22:45.003 { 00:22:45.003 "method": "sock_set_default_impl", 00:22:45.003 "params": { 00:22:45.003 "impl_name": "posix" 00:22:45.003 } 00:22:45.003 }, 00:22:45.003 { 00:22:45.003 "method": "sock_impl_set_options", 00:22:45.003 "params": { 00:22:45.003 "impl_name": "ssl", 00:22:45.003 "recv_buf_size": 4096, 00:22:45.003 "send_buf_size": 4096, 00:22:45.003 "enable_recv_pipe": true, 00:22:45.004 "enable_quickack": false, 00:22:45.004 "enable_placement_id": 0, 00:22:45.004 "enable_zerocopy_send_server": true, 00:22:45.004 "enable_zerocopy_send_client": false, 00:22:45.004 "zerocopy_threshold": 0, 00:22:45.004 "tls_version": 0, 00:22:45.004 "enable_ktls": false 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "sock_impl_set_options", 00:22:45.004 "params": { 00:22:45.004 "impl_name": "posix", 00:22:45.004 "recv_buf_size": 2097152, 00:22:45.004 "send_buf_size": 2097152, 00:22:45.004 "enable_recv_pipe": true, 00:22:45.004 "enable_quickack": false, 00:22:45.004 "enable_placement_id": 0, 00:22:45.004 "enable_zerocopy_send_server": true, 00:22:45.004 "enable_zerocopy_send_client": false, 00:22:45.004 "zerocopy_threshold": 0, 00:22:45.004 "tls_version": 0, 00:22:45.004 "enable_ktls": false 00:22:45.004 } 00:22:45.004 } 00:22:45.004 ] 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "subsystem": "vmd", 00:22:45.004 "config": [] 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "subsystem": "accel", 00:22:45.004 "config": [ 00:22:45.004 { 00:22:45.004 "method": "accel_set_options", 00:22:45.004 "params": { 00:22:45.004 "small_cache_size": 128, 00:22:45.004 "large_cache_size": 16, 00:22:45.004 "task_count": 2048, 00:22:45.004 "sequence_count": 2048, 00:22:45.004 "buf_count": 2048 00:22:45.004 } 00:22:45.004 } 00:22:45.004 ] 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "subsystem": "bdev", 00:22:45.004 "config": [ 00:22:45.004 { 00:22:45.004 "method": "bdev_set_options", 00:22:45.004 "params": { 00:22:45.004 "bdev_io_pool_size": 65535, 00:22:45.004 "bdev_io_cache_size": 256, 00:22:45.004 "bdev_auto_examine": true, 00:22:45.004 "iobuf_small_cache_size": 128, 00:22:45.004 "iobuf_large_cache_size": 16 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "bdev_raid_set_options", 00:22:45.004 "params": { 00:22:45.004 "process_window_size_kb": 1024, 00:22:45.004 "process_max_bandwidth_mb_sec": 0 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "bdev_iscsi_set_options", 00:22:45.004 "params": { 00:22:45.004 "timeout_sec": 30 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "bdev_nvme_set_options", 00:22:45.004 "params": { 00:22:45.004 "action_on_timeout": "none", 00:22:45.004 "timeout_us": 0, 00:22:45.004 "timeout_admin_us": 0, 00:22:45.004 "keep_alive_timeout_ms": 10000, 00:22:45.004 "arbitration_burst": 0, 00:22:45.004 "low_priority_weight": 0, 00:22:45.004 "medium_priority_weight": 0, 00:22:45.004 
"high_priority_weight": 0, 00:22:45.004 "nvme_adminq_poll_period_us": 10000, 00:22:45.004 "nvme_ioq_poll_period_us": 0, 00:22:45.004 "io_queue_requests": 0, 00:22:45.004 "delay_cmd_submit": true, 00:22:45.004 "transport_retry_count": 4, 00:22:45.004 "bdev_retry_count": 3, 00:22:45.004 "transport_ack_timeout": 0, 00:22:45.004 "ctrlr_loss_timeout_sec": 0, 00:22:45.004 "reconnect_delay_sec": 0, 00:22:45.004 "fast_io_fail_timeout_sec": 0, 00:22:45.004 "disable_auto_failback": false, 00:22:45.004 "generate_uuids": false, 00:22:45.004 "transport_tos": 0, 00:22:45.004 "nvme_error_stat": false, 00:22:45.004 "rdma_srq_size": 0, 00:22:45.004 "io_path_stat": false, 00:22:45.004 "allow_accel_sequence": false, 00:22:45.004 "rdma_max_cq_size": 0, 00:22:45.004 "rdma_cm_event_timeout_ms": 0, 00:22:45.004 "dhchap_digests": [ 00:22:45.004 "sha256", 00:22:45.004 "sha384", 00:22:45.004 "sha512" 00:22:45.004 ], 00:22:45.004 "dhchap_dhgroups": [ 00:22:45.004 "null", 00:22:45.004 "ffdhe2048", 00:22:45.004 "ffdhe3072", 00:22:45.004 "ffdhe4096", 00:22:45.004 "ffdhe6144", 00:22:45.004 "ffdhe8192" 00:22:45.004 ] 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "bdev_nvme_set_hotplug", 00:22:45.004 "params": { 00:22:45.004 "period_us": 100000, 00:22:45.004 "enable": false 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "bdev_malloc_create", 00:22:45.004 "params": { 00:22:45.004 "name": "malloc0", 00:22:45.004 "num_blocks": 8192, 00:22:45.004 "block_size": 4096, 00:22:45.004 "physical_block_size": 4096, 00:22:45.004 "uuid": "539de479-68ed-4193-a66a-c746905a6df2", 00:22:45.004 "optimal_io_boundary": 0, 00:22:45.004 "md_size": 0, 00:22:45.004 "dif_type": 0, 00:22:45.004 "dif_is_head_of_md": false, 00:22:45.004 "dif_pi_format": 0 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "bdev_wait_for_examine" 00:22:45.004 } 00:22:45.004 ] 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "subsystem": "nbd", 00:22:45.004 "config": [] 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "subsystem": "scheduler", 00:22:45.004 "config": [ 00:22:45.004 { 00:22:45.004 "method": "framework_set_scheduler", 00:22:45.004 "params": { 00:22:45.004 "name": "static" 00:22:45.004 } 00:22:45.004 } 00:22:45.004 ] 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "subsystem": "nvmf", 00:22:45.004 "config": [ 00:22:45.004 { 00:22:45.004 "method": "nvmf_set_config", 00:22:45.004 "params": { 00:22:45.004 "discovery_filter": "match_any", 00:22:45.004 "admin_cmd_passthru": { 00:22:45.004 "identify_ctrlr": false 00:22:45.004 } 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "nvmf_set_max_subsystems", 00:22:45.004 "params": { 00:22:45.004 "max_subsystems": 1024 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "nvmf_set_crdt", 00:22:45.004 "params": { 00:22:45.004 "crdt1": 0, 00:22:45.004 "crdt2": 0, 00:22:45.004 "crdt3": 0 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "nvmf_create_transport", 00:22:45.004 "params": { 00:22:45.004 "trtype": "TCP", 00:22:45.004 "max_queue_depth": 128, 00:22:45.004 "max_io_qpairs_per_ctrlr": 127, 00:22:45.004 "in_capsule_data_size": 4096, 00:22:45.004 "max_io_size": 131072, 00:22:45.004 "io_unit_size": 131072, 00:22:45.004 "max_aq_depth": 128, 00:22:45.004 "num_shared_buffers": 511, 00:22:45.004 "buf_cache_size": 4294967295, 00:22:45.004 "dif_insert_or_strip": false, 00:22:45.004 "zcopy": false, 00:22:45.004 "c2h_success": false, 00:22:45.004 "sock_priority": 0, 00:22:45.004 "abort_timeout_sec": 1, 00:22:45.004 
"ack_timeout": 0, 00:22:45.004 "data_wr_pool_size": 0 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "nvmf_create_subsystem", 00:22:45.004 "params": { 00:22:45.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.004 "allow_any_host": false, 00:22:45.004 "serial_number": "SPDK00000000000001", 00:22:45.004 "model_number": "SPDK bdev Controller", 00:22:45.004 "max_namespaces": 10, 00:22:45.004 "min_cntlid": 1, 00:22:45.004 "max_cntlid": 65519, 00:22:45.004 "ana_reporting": false 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "nvmf_subsystem_add_host", 00:22:45.004 "params": { 00:22:45.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.004 "host": "nqn.2016-06.io.spdk:host1", 00:22:45.004 "psk": "/tmp/tmp.xOlCtzUide" 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "nvmf_subsystem_add_ns", 00:22:45.004 "params": { 00:22:45.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.004 "namespace": { 00:22:45.004 "nsid": 1, 00:22:45.004 "bdev_name": "malloc0", 00:22:45.004 "nguid": "539DE47968ED4193A66AC746905A6DF2", 00:22:45.004 "uuid": "539de479-68ed-4193-a66a-c746905a6df2", 00:22:45.004 "no_auto_visible": false 00:22:45.004 } 00:22:45.004 } 00:22:45.004 }, 00:22:45.004 { 00:22:45.004 "method": "nvmf_subsystem_add_listener", 00:22:45.004 "params": { 00:22:45.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.004 "listen_address": { 00:22:45.004 "trtype": "TCP", 00:22:45.004 "adrfam": "IPv4", 00:22:45.004 "traddr": "10.0.0.2", 00:22:45.004 "trsvcid": "4420" 00:22:45.004 }, 00:22:45.004 "secure_channel": true 00:22:45.004 } 00:22:45.004 } 00:22:45.004 ] 00:22:45.004 } 00:22:45.004 ] 00:22:45.004 }' 00:22:45.004 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1777421 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1777421 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1777421 ']' 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.005 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.005 [2024-07-23 06:18:38.233980] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:22:45.005 [2024-07-23 06:18:38.234080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.005 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.005 [2024-07-23 06:18:38.270355] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:45.005 [2024-07-23 06:18:38.301926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.320 [2024-07-23 06:18:38.392737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.320 [2024-07-23 06:18:38.392795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.320 [2024-07-23 06:18:38.392823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.320 [2024-07-23 06:18:38.392837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.320 [2024-07-23 06:18:38.392849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.320 [2024-07-23 06:18:38.392934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.320 [2024-07-23 06:18:38.632439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.320 [2024-07-23 06:18:38.653464] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.578 [2024-07-23 06:18:38.669533] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.578 [2024-07-23 06:18:38.669781] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1777482 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1777482 /var/tmp/bdevperf.sock 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1777482 ']' 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.148 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:46.148 "subsystems": [ 00:22:46.148 { 00:22:46.148 "subsystem": "keyring", 00:22:46.148 "config": [] 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "subsystem": "iobuf", 00:22:46.148 "config": [ 00:22:46.148 { 00:22:46.148 "method": "iobuf_set_options", 00:22:46.148 "params": { 00:22:46.148 "small_pool_count": 8192, 00:22:46.148 "large_pool_count": 1024, 00:22:46.148 "small_bufsize": 8192, 00:22:46.148 "large_bufsize": 135168 00:22:46.148 } 00:22:46.148 } 00:22:46.148 ] 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "subsystem": "sock", 00:22:46.148 "config": [ 00:22:46.148 { 00:22:46.148 "method": "sock_set_default_impl", 00:22:46.148 "params": { 00:22:46.148 "impl_name": "posix" 00:22:46.148 } 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "method": "sock_impl_set_options", 00:22:46.148 "params": { 00:22:46.148 "impl_name": "ssl", 00:22:46.148 "recv_buf_size": 4096, 00:22:46.148 "send_buf_size": 4096, 00:22:46.148 "enable_recv_pipe": true, 00:22:46.148 "enable_quickack": false, 00:22:46.148 "enable_placement_id": 0, 00:22:46.148 "enable_zerocopy_send_server": true, 00:22:46.148 "enable_zerocopy_send_client": false, 00:22:46.148 "zerocopy_threshold": 0, 00:22:46.148 "tls_version": 0, 00:22:46.148 "enable_ktls": false 00:22:46.148 } 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "method": "sock_impl_set_options", 00:22:46.148 "params": { 00:22:46.148 "impl_name": "posix", 00:22:46.148 "recv_buf_size": 2097152, 00:22:46.148 "send_buf_size": 2097152, 00:22:46.148 "enable_recv_pipe": true, 00:22:46.148 "enable_quickack": false, 00:22:46.148 "enable_placement_id": 0, 00:22:46.148 "enable_zerocopy_send_server": true, 00:22:46.148 "enable_zerocopy_send_client": false, 00:22:46.148 "zerocopy_threshold": 0, 00:22:46.148 "tls_version": 0, 00:22:46.148 "enable_ktls": false 00:22:46.148 } 00:22:46.148 } 00:22:46.148 ] 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "subsystem": "vmd", 00:22:46.148 "config": [] 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "subsystem": "accel", 00:22:46.148 "config": [ 00:22:46.148 { 00:22:46.148 "method": "accel_set_options", 00:22:46.148 "params": { 00:22:46.148 "small_cache_size": 128, 00:22:46.148 "large_cache_size": 16, 00:22:46.148 "task_count": 2048, 00:22:46.148 "sequence_count": 2048, 00:22:46.148 "buf_count": 2048 00:22:46.148 } 00:22:46.148 } 00:22:46.148 ] 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "subsystem": "bdev", 00:22:46.148 "config": [ 00:22:46.148 { 00:22:46.148 "method": "bdev_set_options", 00:22:46.148 "params": { 00:22:46.148 "bdev_io_pool_size": 65535, 00:22:46.148 "bdev_io_cache_size": 256, 00:22:46.148 "bdev_auto_examine": true, 00:22:46.148 "iobuf_small_cache_size": 128, 00:22:46.148 "iobuf_large_cache_size": 16 00:22:46.148 } 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "method": "bdev_raid_set_options", 00:22:46.148 "params": { 00:22:46.148 "process_window_size_kb": 1024, 00:22:46.148 "process_max_bandwidth_mb_sec": 0 00:22:46.148 } 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "method": "bdev_iscsi_set_options", 00:22:46.148 "params": { 00:22:46.148 "timeout_sec": 30 00:22:46.148 } 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "method": "bdev_nvme_set_options", 00:22:46.148 "params": { 00:22:46.148 "action_on_timeout": "none", 00:22:46.148 "timeout_us": 0, 00:22:46.148 "timeout_admin_us": 0, 00:22:46.148 "keep_alive_timeout_ms": 
10000, 00:22:46.148 "arbitration_burst": 0, 00:22:46.148 "low_priority_weight": 0, 00:22:46.148 "medium_priority_weight": 0, 00:22:46.148 "high_priority_weight": 0, 00:22:46.148 "nvme_adminq_poll_period_us": 10000, 00:22:46.148 "nvme_ioq_poll_period_us": 0, 00:22:46.148 "io_queue_requests": 512, 00:22:46.148 "delay_cmd_submit": true, 00:22:46.148 "transport_retry_count": 4, 00:22:46.148 "bdev_retry_count": 3, 00:22:46.148 "transport_ack_timeout": 0, 00:22:46.148 "ctrlr_loss_timeout_sec": 0, 00:22:46.148 "reconnect_delay_sec": 0, 00:22:46.148 "fast_io_fail_timeout_sec": 0, 00:22:46.148 "disable_auto_failback": false, 00:22:46.148 "generate_uuids": false, 00:22:46.148 "transport_tos": 0, 00:22:46.148 "nvme_error_stat": false, 00:22:46.148 "rdma_srq_size": 0, 00:22:46.148 "io_path_stat": false, 00:22:46.148 "allow_accel_sequence": false, 00:22:46.148 "rdma_max_cq_size": 0, 00:22:46.148 "rdma_cm_event_timeout_ms": 0, 00:22:46.148 "dhchap_digests": [ 00:22:46.148 "sha256", 00:22:46.148 "sha384", 00:22:46.148 "sha512" 00:22:46.148 ], 00:22:46.148 "dhchap_dhgroups": [ 00:22:46.148 "null", 00:22:46.148 "ffdhe2048", 00:22:46.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.148 "ffdhe3072", 00:22:46.148 "ffdhe4096", 00:22:46.148 "ffdhe6144", 00:22:46.148 "ffdhe8192" 00:22:46.148 ] 00:22:46.148 } 00:22:46.148 }, 00:22:46.148 { 00:22:46.148 "method": "bdev_nvme_attach_controller", 00:22:46.148 "params": { 00:22:46.148 "name": "TLSTEST", 00:22:46.148 "trtype": "TCP", 00:22:46.148 "adrfam": "IPv4", 00:22:46.149 "traddr": "10.0.0.2", 00:22:46.149 "trsvcid": "4420", 00:22:46.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.149 "prchk_reftag": false, 00:22:46.149 "prchk_guard": false, 00:22:46.149 "ctrlr_loss_timeout_sec": 0, 00:22:46.149 "reconnect_delay_sec": 0, 00:22:46.149 "fast_io_fail_timeout_sec": 0, 00:22:46.149 "psk": "/tmp/tmp.xOlCtzUide", 00:22:46.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.149 "hdgst": false, 00:22:46.149 "ddgst": false 00:22:46.149 } 00:22:46.149 }, 00:22:46.149 { 00:22:46.149 "method": "bdev_nvme_set_hotplug", 00:22:46.149 "params": { 00:22:46.149 "period_us": 100000, 00:22:46.149 "enable": false 00:22:46.149 } 00:22:46.149 }, 00:22:46.149 { 00:22:46.149 "method": "bdev_wait_for_examine" 00:22:46.149 } 00:22:46.149 ] 00:22:46.149 }, 00:22:46.149 { 00:22:46.149 "subsystem": "nbd", 00:22:46.149 "config": [] 00:22:46.149 } 00:22:46.149 ] 00:22:46.149 }' 00:22:46.149 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.149 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.149 [2024-07-23 06:18:39.280566] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:46.149 [2024-07-23 06:18:39.280684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777482 ] 00:22:46.149 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.149 [2024-07-23 06:18:39.313016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
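The bdevperf initiator above receives its whole configuration through -c /dev/fd/63: the JSON blob echoed by target/tls.sh@204 is never written to disk but handed over on a file descriptor, which is consistent with bash process substitution. A minimal sketch of that launch, assuming the JSON is kept in a shell variable (the name bperfcfg and the backgrounding are illustrative; the flags are copied from the command line logged above):

  # Feed the echoed JSON to bdevperf via process substitution; -c /dev/fd/63 in the log
  # is simply what <(...) expands to.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperfcfg") &
  # -z keeps bdevperf idle until perform_tests is sent over /var/tmp/bdevperf.sock.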
00:22:46.149 [2024-07-23 06:18:39.340643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.149 [2024-07-23 06:18:39.424230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.409 [2024-07-23 06:18:39.591744] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.409 [2024-07-23 06:18:39.591887] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:46.975 06:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.975 06:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:46.975 06:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:47.235 Running I/O for 10 seconds... 00:22:57.224 00:22:57.224 Latency(us) 00:22:57.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.224 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:57.224 Verification LBA range: start 0x0 length 0x2000 00:22:57.224 TLSTESTn1 : 10.08 1193.76 4.66 0.00 0.00 106844.40 8495.41 89711.50 00:22:57.224 =================================================================================================================== 00:22:57.224 Total : 1193.76 4.66 0.00 0.00 106844.40 8495.41 89711.50 00:22:57.224 0 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1777482 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1777482 ']' 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1777482 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1777482 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1777482' 00:22:57.224 killing process with pid 1777482 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1777482 00:22:57.224 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.224 00:22:57.224 Latency(us) 00:22:57.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.224 =================================================================================================================== 00:22:57.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.224 [2024-07-23 06:18:50.511195] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:57.224 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@972 -- # wait 1777482 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1777421 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1777421 ']' 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1777421 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1777421 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1777421' 00:22:57.485 killing process with pid 1777421 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1777421 00:22:57.485 [2024-07-23 06:18:50.754106] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:57.485 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1777421 00:22:57.744 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:57.744 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.744 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.744 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1778904 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1778904 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1778904 ']' 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.744 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.744 [2024-07-23 06:18:51.046995] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:22:57.745 [2024-07-23 06:18:51.047067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.745 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.745 [2024-07-23 06:18:51.088436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:58.003 [2024-07-23 06:18:51.124227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.003 [2024-07-23 06:18:51.214657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.003 [2024-07-23 06:18:51.214713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.003 [2024-07-23 06:18:51.214743] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.003 [2024-07-23 06:18:51.214759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.003 [2024-07-23 06:18:51.214773] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.003 [2024-07-23 06:18:51.214800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.003 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.003 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:58.003 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.003 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.003 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.262 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.262 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.xOlCtzUide 00:22:58.262 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xOlCtzUide 00:22:58.262 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:58.520 [2024-07-23 06:18:51.615165] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.520 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:58.778 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:59.036 [2024-07-23 06:18:52.164591] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.036 [2024-07-23 06:18:52.164828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.036 06:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:59.294 malloc0 
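At this point setup_nvmf_tgt has driven the target entirely through scripts/rpc.py: a TCP transport, a subsystem, a TLS-enabled listener (-k) and a 32 MiB malloc bdev; the next two calls in the log attach that bdev as namespace 1 and register the allowed host together with its PSK. Condensed into one sequence (rpc.py stands for the full scripts/rpc.py path spelled out in the log, and the key file is the temporary PSK the test created earlier):

  rpc.py nvmf_create_transport -t tcp -o                     # create the TCP transport (flags as logged)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -s SPDK00000000000001 -m 10                            # serial number, up to 10 namespaces
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k                          # -k asks for a TLS-secured listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed bdev, 4 KiB blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide    # PSK-by-path is flagged as deprecated for v24.09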
00:22:59.294 06:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:59.552 06:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOlCtzUide 00:22:59.810 [2024-07-23 06:18:53.021395] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1779189 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1779189 /var/tmp/bdevperf.sock 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1779189 ']' 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.810 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.810 [2024-07-23 06:18:53.078440] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:59.810 [2024-07-23 06:18:53.078529] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779189 ] 00:22:59.810 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.810 [2024-07-23 06:18:53.110122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:59.810 [2024-07-23 06:18:53.137285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.077 [2024-07-23 06:18:53.222385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.077 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.077 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:00.077 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xOlCtzUide 00:23:00.337 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:00.595 [2024-07-23 06:18:53.787996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.595 nvme0n1 00:23:00.595 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:00.855 Running I/O for 1 seconds... 00:23:01.794 00:23:01.795 Latency(us) 00:23:01.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.795 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.795 Verification LBA range: start 0x0 length 0x2000 00:23:01.795 nvme0n1 : 1.06 1792.72 7.00 0.00 0.00 69658.63 6213.78 100197.26 00:23:01.795 =================================================================================================================== 00:23:01.795 Total : 1792.72 7.00 0.00 0.00 69658.63 6213.78 100197.26 00:23:01.795 0 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1779189 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1779189 ']' 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1779189 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1779189 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1779189' 00:23:01.795 killing process with pid 1779189 00:23:01.795 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1779189 00:23:01.795 Received shutdown signal, test time was about 1.000000 seconds 00:23:01.795 00:23:01.795 Latency(us) 00:23:01.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.795 =================================================================================================================== 00:23:01.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.795 
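The run that just finished (bdevperf pid 1779189) exercised the keyring-based way of supplying the PSK on the initiator side: instead of passing a key file straight to the controller, the file is first registered as a named key and that name is then referenced by bdev_nvme_attach_controller. Condensed from the log (paths shortened; the key file is the test's temporary PSK):

  # All RPCs go to the bdevperf instance listening on /var/tmp/bdevperf.sock.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xOlCtzUide
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # TLS handshake uses key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # run the verify workload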
06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1779189 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1778904 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1778904 ']' 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1778904 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1778904 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1778904' 00:23:02.055 killing process with pid 1778904 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1778904 00:23:02.055 [2024-07-23 06:18:55.343357] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:02.055 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1778904 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1779470 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1779470 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1779470 ']' 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.317 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.577 [2024-07-23 06:18:55.662752] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:23:02.577 [2024-07-23 06:18:55.662849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.577 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.577 [2024-07-23 06:18:55.699576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:02.577 [2024-07-23 06:18:55.736577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.577 [2024-07-23 06:18:55.817563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.577 [2024-07-23 06:18:55.817644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.577 [2024-07-23 06:18:55.817670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.577 [2024-07-23 06:18:55.817682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.577 [2024-07-23 06:18:55.817692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.577 [2024-07-23 06:18:55.817717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.836 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.836 [2024-07-23 06:18:55.956421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.836 malloc0 00:23:02.836 [2024-07-23 06:18:55.988495] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.836 [2024-07-23 06:18:55.998826] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1779491 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1779491 /var/tmp/bdevperf.sock 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1779491 ']' 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.836 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.836 [2024-07-23 06:18:56.063230] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:23:02.836 [2024-07-23 06:18:56.063297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779491 ] 00:23:02.836 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.836 [2024-07-23 06:18:56.095164] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:02.836 [2024-07-23 06:18:56.120370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.094 [2024-07-23 06:18:56.203970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.094 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.094 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:03.094 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xOlCtzUide 00:23:03.351 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:03.608 [2024-07-23 06:18:56.777334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.608 nvme0n1 00:23:03.608 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.865 Running I/O for 1 seconds... 
00:23:04.803 00:23:04.803 Latency(us) 00:23:04.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.803 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:04.803 Verification LBA range: start 0x0 length 0x2000 00:23:04.803 nvme0n1 : 1.06 1894.88 7.40 0.00 0.00 65933.34 6456.51 93983.48 00:23:04.803 =================================================================================================================== 00:23:04.803 Total : 1894.88 7.40 0.00 0.00 65933.34 6456.51 93983.48 00:23:04.803 0 00:23:04.803 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:04.803 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.803 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.062 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.062 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:05.062 "subsystems": [ 00:23:05.062 { 00:23:05.062 "subsystem": "keyring", 00:23:05.062 "config": [ 00:23:05.062 { 00:23:05.062 "method": "keyring_file_add_key", 00:23:05.062 "params": { 00:23:05.062 "name": "key0", 00:23:05.062 "path": "/tmp/tmp.xOlCtzUide" 00:23:05.062 } 00:23:05.062 } 00:23:05.062 ] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "iobuf", 00:23:05.062 "config": [ 00:23:05.062 { 00:23:05.062 "method": "iobuf_set_options", 00:23:05.062 "params": { 00:23:05.062 "small_pool_count": 8192, 00:23:05.062 "large_pool_count": 1024, 00:23:05.062 "small_bufsize": 8192, 00:23:05.062 "large_bufsize": 135168 00:23:05.062 } 00:23:05.062 } 00:23:05.062 ] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "sock", 00:23:05.062 "config": [ 00:23:05.062 { 00:23:05.062 "method": "sock_set_default_impl", 00:23:05.062 "params": { 00:23:05.062 "impl_name": "posix" 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "sock_impl_set_options", 00:23:05.062 "params": { 00:23:05.062 "impl_name": "ssl", 00:23:05.062 "recv_buf_size": 4096, 00:23:05.062 "send_buf_size": 4096, 00:23:05.062 "enable_recv_pipe": true, 00:23:05.062 "enable_quickack": false, 00:23:05.062 "enable_placement_id": 0, 00:23:05.062 "enable_zerocopy_send_server": true, 00:23:05.062 "enable_zerocopy_send_client": false, 00:23:05.062 "zerocopy_threshold": 0, 00:23:05.062 "tls_version": 0, 00:23:05.062 "enable_ktls": false 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "sock_impl_set_options", 00:23:05.062 "params": { 00:23:05.062 "impl_name": "posix", 00:23:05.062 "recv_buf_size": 2097152, 00:23:05.062 "send_buf_size": 2097152, 00:23:05.062 "enable_recv_pipe": true, 00:23:05.062 "enable_quickack": false, 00:23:05.062 "enable_placement_id": 0, 00:23:05.062 "enable_zerocopy_send_server": true, 00:23:05.062 "enable_zerocopy_send_client": false, 00:23:05.062 "zerocopy_threshold": 0, 00:23:05.062 "tls_version": 0, 00:23:05.062 "enable_ktls": false 00:23:05.062 } 00:23:05.062 } 00:23:05.062 ] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "vmd", 00:23:05.062 "config": [] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "accel", 00:23:05.062 "config": [ 00:23:05.062 { 00:23:05.062 "method": "accel_set_options", 00:23:05.062 "params": { 00:23:05.062 "small_cache_size": 128, 00:23:05.062 "large_cache_size": 16, 00:23:05.062 "task_count": 2048, 00:23:05.062 "sequence_count": 2048, 00:23:05.062 "buf_count": 
2048 00:23:05.062 } 00:23:05.062 } 00:23:05.062 ] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "bdev", 00:23:05.062 "config": [ 00:23:05.062 { 00:23:05.062 "method": "bdev_set_options", 00:23:05.062 "params": { 00:23:05.062 "bdev_io_pool_size": 65535, 00:23:05.062 "bdev_io_cache_size": 256, 00:23:05.062 "bdev_auto_examine": true, 00:23:05.062 "iobuf_small_cache_size": 128, 00:23:05.062 "iobuf_large_cache_size": 16 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "bdev_raid_set_options", 00:23:05.062 "params": { 00:23:05.062 "process_window_size_kb": 1024, 00:23:05.062 "process_max_bandwidth_mb_sec": 0 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "bdev_iscsi_set_options", 00:23:05.062 "params": { 00:23:05.062 "timeout_sec": 30 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "bdev_nvme_set_options", 00:23:05.062 "params": { 00:23:05.062 "action_on_timeout": "none", 00:23:05.062 "timeout_us": 0, 00:23:05.062 "timeout_admin_us": 0, 00:23:05.062 "keep_alive_timeout_ms": 10000, 00:23:05.062 "arbitration_burst": 0, 00:23:05.062 "low_priority_weight": 0, 00:23:05.062 "medium_priority_weight": 0, 00:23:05.062 "high_priority_weight": 0, 00:23:05.062 "nvme_adminq_poll_period_us": 10000, 00:23:05.062 "nvme_ioq_poll_period_us": 0, 00:23:05.062 "io_queue_requests": 0, 00:23:05.062 "delay_cmd_submit": true, 00:23:05.062 "transport_retry_count": 4, 00:23:05.062 "bdev_retry_count": 3, 00:23:05.062 "transport_ack_timeout": 0, 00:23:05.062 "ctrlr_loss_timeout_sec": 0, 00:23:05.062 "reconnect_delay_sec": 0, 00:23:05.062 "fast_io_fail_timeout_sec": 0, 00:23:05.062 "disable_auto_failback": false, 00:23:05.062 "generate_uuids": false, 00:23:05.062 "transport_tos": 0, 00:23:05.062 "nvme_error_stat": false, 00:23:05.062 "rdma_srq_size": 0, 00:23:05.062 "io_path_stat": false, 00:23:05.062 "allow_accel_sequence": false, 00:23:05.062 "rdma_max_cq_size": 0, 00:23:05.062 "rdma_cm_event_timeout_ms": 0, 00:23:05.062 "dhchap_digests": [ 00:23:05.062 "sha256", 00:23:05.062 "sha384", 00:23:05.062 "sha512" 00:23:05.062 ], 00:23:05.062 "dhchap_dhgroups": [ 00:23:05.062 "null", 00:23:05.062 "ffdhe2048", 00:23:05.062 "ffdhe3072", 00:23:05.062 "ffdhe4096", 00:23:05.062 "ffdhe6144", 00:23:05.062 "ffdhe8192" 00:23:05.062 ] 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "bdev_nvme_set_hotplug", 00:23:05.062 "params": { 00:23:05.062 "period_us": 100000, 00:23:05.062 "enable": false 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "bdev_malloc_create", 00:23:05.062 "params": { 00:23:05.062 "name": "malloc0", 00:23:05.062 "num_blocks": 8192, 00:23:05.062 "block_size": 4096, 00:23:05.062 "physical_block_size": 4096, 00:23:05.062 "uuid": "7d249493-51fa-41dd-994c-d6c9c2f3d552", 00:23:05.062 "optimal_io_boundary": 0, 00:23:05.062 "md_size": 0, 00:23:05.062 "dif_type": 0, 00:23:05.062 "dif_is_head_of_md": false, 00:23:05.062 "dif_pi_format": 0 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "bdev_wait_for_examine" 00:23:05.062 } 00:23:05.062 ] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "nbd", 00:23:05.062 "config": [] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "scheduler", 00:23:05.062 "config": [ 00:23:05.062 { 00:23:05.062 "method": "framework_set_scheduler", 00:23:05.062 "params": { 00:23:05.062 "name": "static" 00:23:05.062 } 00:23:05.062 } 00:23:05.062 ] 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "subsystem": "nvmf", 00:23:05.062 "config": [ 00:23:05.062 { 00:23:05.062 
"method": "nvmf_set_config", 00:23:05.062 "params": { 00:23:05.062 "discovery_filter": "match_any", 00:23:05.062 "admin_cmd_passthru": { 00:23:05.062 "identify_ctrlr": false 00:23:05.062 } 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "nvmf_set_max_subsystems", 00:23:05.062 "params": { 00:23:05.062 "max_subsystems": 1024 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "nvmf_set_crdt", 00:23:05.062 "params": { 00:23:05.062 "crdt1": 0, 00:23:05.062 "crdt2": 0, 00:23:05.062 "crdt3": 0 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "nvmf_create_transport", 00:23:05.062 "params": { 00:23:05.062 "trtype": "TCP", 00:23:05.062 "max_queue_depth": 128, 00:23:05.062 "max_io_qpairs_per_ctrlr": 127, 00:23:05.062 "in_capsule_data_size": 4096, 00:23:05.062 "max_io_size": 131072, 00:23:05.062 "io_unit_size": 131072, 00:23:05.062 "max_aq_depth": 128, 00:23:05.062 "num_shared_buffers": 511, 00:23:05.062 "buf_cache_size": 4294967295, 00:23:05.062 "dif_insert_or_strip": false, 00:23:05.062 "zcopy": false, 00:23:05.062 "c2h_success": false, 00:23:05.062 "sock_priority": 0, 00:23:05.062 "abort_timeout_sec": 1, 00:23:05.062 "ack_timeout": 0, 00:23:05.062 "data_wr_pool_size": 0 00:23:05.062 } 00:23:05.062 }, 00:23:05.062 { 00:23:05.062 "method": "nvmf_create_subsystem", 00:23:05.062 "params": { 00:23:05.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.062 "allow_any_host": false, 00:23:05.063 "serial_number": "00000000000000000000", 00:23:05.063 "model_number": "SPDK bdev Controller", 00:23:05.063 "max_namespaces": 32, 00:23:05.063 "min_cntlid": 1, 00:23:05.063 "max_cntlid": 65519, 00:23:05.063 "ana_reporting": false 00:23:05.063 } 00:23:05.063 }, 00:23:05.063 { 00:23:05.063 "method": "nvmf_subsystem_add_host", 00:23:05.063 "params": { 00:23:05.063 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.063 "host": "nqn.2016-06.io.spdk:host1", 00:23:05.063 "psk": "key0" 00:23:05.063 } 00:23:05.063 }, 00:23:05.063 { 00:23:05.063 "method": "nvmf_subsystem_add_ns", 00:23:05.063 "params": { 00:23:05.063 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.063 "namespace": { 00:23:05.063 "nsid": 1, 00:23:05.063 "bdev_name": "malloc0", 00:23:05.063 "nguid": "7D24949351FA41DD994CD6C9C2F3D552", 00:23:05.063 "uuid": "7d249493-51fa-41dd-994c-d6c9c2f3d552", 00:23:05.063 "no_auto_visible": false 00:23:05.063 } 00:23:05.063 } 00:23:05.063 }, 00:23:05.063 { 00:23:05.063 "method": "nvmf_subsystem_add_listener", 00:23:05.063 "params": { 00:23:05.063 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.063 "listen_address": { 00:23:05.063 "trtype": "TCP", 00:23:05.063 "adrfam": "IPv4", 00:23:05.063 "traddr": "10.0.0.2", 00:23:05.063 "trsvcid": "4420" 00:23:05.063 }, 00:23:05.063 "secure_channel": false, 00:23:05.063 "sock_impl": "ssl" 00:23:05.063 } 00:23:05.063 } 00:23:05.063 ] 00:23:05.063 } 00:23:05.063 ] 00:23:05.063 }' 00:23:05.063 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:05.321 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:05.321 "subsystems": [ 00:23:05.321 { 00:23:05.321 "subsystem": "keyring", 00:23:05.321 "config": [ 00:23:05.321 { 00:23:05.321 "method": "keyring_file_add_key", 00:23:05.321 "params": { 00:23:05.321 "name": "key0", 00:23:05.321 "path": "/tmp/tmp.xOlCtzUide" 00:23:05.321 } 00:23:05.321 } 00:23:05.321 ] 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "subsystem": "iobuf", 00:23:05.321 
"config": [ 00:23:05.321 { 00:23:05.321 "method": "iobuf_set_options", 00:23:05.321 "params": { 00:23:05.321 "small_pool_count": 8192, 00:23:05.321 "large_pool_count": 1024, 00:23:05.321 "small_bufsize": 8192, 00:23:05.321 "large_bufsize": 135168 00:23:05.321 } 00:23:05.321 } 00:23:05.321 ] 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "subsystem": "sock", 00:23:05.321 "config": [ 00:23:05.321 { 00:23:05.321 "method": "sock_set_default_impl", 00:23:05.321 "params": { 00:23:05.321 "impl_name": "posix" 00:23:05.321 } 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "method": "sock_impl_set_options", 00:23:05.321 "params": { 00:23:05.321 "impl_name": "ssl", 00:23:05.321 "recv_buf_size": 4096, 00:23:05.321 "send_buf_size": 4096, 00:23:05.321 "enable_recv_pipe": true, 00:23:05.321 "enable_quickack": false, 00:23:05.321 "enable_placement_id": 0, 00:23:05.321 "enable_zerocopy_send_server": true, 00:23:05.321 "enable_zerocopy_send_client": false, 00:23:05.321 "zerocopy_threshold": 0, 00:23:05.321 "tls_version": 0, 00:23:05.321 "enable_ktls": false 00:23:05.321 } 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "method": "sock_impl_set_options", 00:23:05.321 "params": { 00:23:05.321 "impl_name": "posix", 00:23:05.321 "recv_buf_size": 2097152, 00:23:05.321 "send_buf_size": 2097152, 00:23:05.321 "enable_recv_pipe": true, 00:23:05.321 "enable_quickack": false, 00:23:05.321 "enable_placement_id": 0, 00:23:05.321 "enable_zerocopy_send_server": true, 00:23:05.321 "enable_zerocopy_send_client": false, 00:23:05.321 "zerocopy_threshold": 0, 00:23:05.321 "tls_version": 0, 00:23:05.321 "enable_ktls": false 00:23:05.321 } 00:23:05.321 } 00:23:05.321 ] 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "subsystem": "vmd", 00:23:05.321 "config": [] 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "subsystem": "accel", 00:23:05.321 "config": [ 00:23:05.321 { 00:23:05.321 "method": "accel_set_options", 00:23:05.321 "params": { 00:23:05.321 "small_cache_size": 128, 00:23:05.321 "large_cache_size": 16, 00:23:05.321 "task_count": 2048, 00:23:05.321 "sequence_count": 2048, 00:23:05.321 "buf_count": 2048 00:23:05.321 } 00:23:05.321 } 00:23:05.321 ] 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "subsystem": "bdev", 00:23:05.321 "config": [ 00:23:05.321 { 00:23:05.321 "method": "bdev_set_options", 00:23:05.321 "params": { 00:23:05.321 "bdev_io_pool_size": 65535, 00:23:05.321 "bdev_io_cache_size": 256, 00:23:05.321 "bdev_auto_examine": true, 00:23:05.321 "iobuf_small_cache_size": 128, 00:23:05.321 "iobuf_large_cache_size": 16 00:23:05.321 } 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "method": "bdev_raid_set_options", 00:23:05.321 "params": { 00:23:05.321 "process_window_size_kb": 1024, 00:23:05.321 "process_max_bandwidth_mb_sec": 0 00:23:05.321 } 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "method": "bdev_iscsi_set_options", 00:23:05.321 "params": { 00:23:05.321 "timeout_sec": 30 00:23:05.321 } 00:23:05.321 }, 00:23:05.321 { 00:23:05.321 "method": "bdev_nvme_set_options", 00:23:05.321 "params": { 00:23:05.321 "action_on_timeout": "none", 00:23:05.321 "timeout_us": 0, 00:23:05.321 "timeout_admin_us": 0, 00:23:05.321 "keep_alive_timeout_ms": 10000, 00:23:05.321 "arbitration_burst": 0, 00:23:05.321 "low_priority_weight": 0, 00:23:05.321 "medium_priority_weight": 0, 00:23:05.321 "high_priority_weight": 0, 00:23:05.321 "nvme_adminq_poll_period_us": 10000, 00:23:05.321 "nvme_ioq_poll_period_us": 0, 00:23:05.321 "io_queue_requests": 512, 00:23:05.321 "delay_cmd_submit": true, 00:23:05.321 "transport_retry_count": 4, 00:23:05.321 "bdev_retry_count": 3, 
00:23:05.321 "transport_ack_timeout": 0, 00:23:05.321 "ctrlr_loss_timeout_sec": 0, 00:23:05.321 "reconnect_delay_sec": 0, 00:23:05.321 "fast_io_fail_timeout_sec": 0, 00:23:05.321 "disable_auto_failback": false, 00:23:05.321 "generate_uuids": false, 00:23:05.321 "transport_tos": 0, 00:23:05.321 "nvme_error_stat": false, 00:23:05.321 "rdma_srq_size": 0, 00:23:05.321 "io_path_stat": false, 00:23:05.321 "allow_accel_sequence": false, 00:23:05.321 "rdma_max_cq_size": 0, 00:23:05.321 "rdma_cm_event_timeout_ms": 0, 00:23:05.321 "dhchap_digests": [ 00:23:05.321 "sha256", 00:23:05.321 "sha384", 00:23:05.321 "sha512" 00:23:05.321 ], 00:23:05.321 "dhchap_dhgroups": [ 00:23:05.321 "null", 00:23:05.321 "ffdhe2048", 00:23:05.321 "ffdhe3072", 00:23:05.321 "ffdhe4096", 00:23:05.321 "ffdhe6144", 00:23:05.321 "ffdhe8192" 00:23:05.321 ] 00:23:05.321 } 00:23:05.321 }, 00:23:05.322 { 00:23:05.322 "method": "bdev_nvme_attach_controller", 00:23:05.322 "params": { 00:23:05.322 "name": "nvme0", 00:23:05.322 "trtype": "TCP", 00:23:05.322 "adrfam": "IPv4", 00:23:05.322 "traddr": "10.0.0.2", 00:23:05.322 "trsvcid": "4420", 00:23:05.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.322 "prchk_reftag": false, 00:23:05.322 "prchk_guard": false, 00:23:05.322 "ctrlr_loss_timeout_sec": 0, 00:23:05.322 "reconnect_delay_sec": 0, 00:23:05.322 "fast_io_fail_timeout_sec": 0, 00:23:05.322 "psk": "key0", 00:23:05.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.322 "hdgst": false, 00:23:05.322 "ddgst": false 00:23:05.322 } 00:23:05.322 }, 00:23:05.322 { 00:23:05.322 "method": "bdev_nvme_set_hotplug", 00:23:05.322 "params": { 00:23:05.322 "period_us": 100000, 00:23:05.322 "enable": false 00:23:05.322 } 00:23:05.322 }, 00:23:05.322 { 00:23:05.322 "method": "bdev_enable_histogram", 00:23:05.322 "params": { 00:23:05.322 "name": "nvme0n1", 00:23:05.322 "enable": true 00:23:05.322 } 00:23:05.322 }, 00:23:05.322 { 00:23:05.322 "method": "bdev_wait_for_examine" 00:23:05.322 } 00:23:05.322 ] 00:23:05.322 }, 00:23:05.322 { 00:23:05.322 "subsystem": "nbd", 00:23:05.322 "config": [] 00:23:05.322 } 00:23:05.322 ] 00:23:05.322 }' 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1779491 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1779491 ']' 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1779491 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1779491 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1779491' 00:23:05.322 killing process with pid 1779491 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1779491 00:23:05.322 Received shutdown signal, test time was about 1.000000 seconds 00:23:05.322 00:23:05.322 Latency(us) 00:23:05.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.322 
=================================================================================================================== 00:23:05.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.322 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1779491 00:23:05.579 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1779470 00:23:05.579 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1779470 ']' 00:23:05.579 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1779470 00:23:05.579 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:05.579 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:05.580 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1779470 00:23:05.580 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:05.580 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:05.580 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1779470' 00:23:05.580 killing process with pid 1779470 00:23:05.580 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1779470 00:23:05.580 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1779470 00:23:05.838 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:05.838 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.838 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:05.838 "subsystems": [ 00:23:05.838 { 00:23:05.838 "subsystem": "keyring", 00:23:05.838 "config": [ 00:23:05.838 { 00:23:05.838 "method": "keyring_file_add_key", 00:23:05.838 "params": { 00:23:05.838 "name": "key0", 00:23:05.838 "path": "/tmp/tmp.xOlCtzUide" 00:23:05.838 } 00:23:05.838 } 00:23:05.838 ] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "iobuf", 00:23:05.838 "config": [ 00:23:05.838 { 00:23:05.838 "method": "iobuf_set_options", 00:23:05.838 "params": { 00:23:05.838 "small_pool_count": 8192, 00:23:05.838 "large_pool_count": 1024, 00:23:05.838 "small_bufsize": 8192, 00:23:05.838 "large_bufsize": 135168 00:23:05.838 } 00:23:05.838 } 00:23:05.838 ] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "sock", 00:23:05.838 "config": [ 00:23:05.838 { 00:23:05.838 "method": "sock_set_default_impl", 00:23:05.838 "params": { 00:23:05.838 "impl_name": "posix" 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "sock_impl_set_options", 00:23:05.838 "params": { 00:23:05.838 "impl_name": "ssl", 00:23:05.838 "recv_buf_size": 4096, 00:23:05.838 "send_buf_size": 4096, 00:23:05.838 "enable_recv_pipe": true, 00:23:05.838 "enable_quickack": false, 00:23:05.838 "enable_placement_id": 0, 00:23:05.838 "enable_zerocopy_send_server": true, 00:23:05.838 "enable_zerocopy_send_client": false, 00:23:05.838 "zerocopy_threshold": 0, 00:23:05.838 "tls_version": 0, 00:23:05.838 "enable_ktls": false 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "sock_impl_set_options", 00:23:05.838 "params": { 00:23:05.838 "impl_name": "posix", 00:23:05.838 "recv_buf_size": 2097152, 
00:23:05.838 "send_buf_size": 2097152, 00:23:05.838 "enable_recv_pipe": true, 00:23:05.838 "enable_quickack": false, 00:23:05.838 "enable_placement_id": 0, 00:23:05.838 "enable_zerocopy_send_server": true, 00:23:05.838 "enable_zerocopy_send_client": false, 00:23:05.838 "zerocopy_threshold": 0, 00:23:05.838 "tls_version": 0, 00:23:05.838 "enable_ktls": false 00:23:05.838 } 00:23:05.838 } 00:23:05.838 ] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "vmd", 00:23:05.838 "config": [] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "accel", 00:23:05.838 "config": [ 00:23:05.838 { 00:23:05.838 "method": "accel_set_options", 00:23:05.838 "params": { 00:23:05.838 "small_cache_size": 128, 00:23:05.838 "large_cache_size": 16, 00:23:05.838 "task_count": 2048, 00:23:05.838 "sequence_count": 2048, 00:23:05.838 "buf_count": 2048 00:23:05.838 } 00:23:05.838 } 00:23:05.838 ] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "bdev", 00:23:05.838 "config": [ 00:23:05.838 { 00:23:05.838 "method": "bdev_set_options", 00:23:05.838 "params": { 00:23:05.838 "bdev_io_pool_size": 65535, 00:23:05.838 "bdev_io_cache_size": 256, 00:23:05.838 "bdev_auto_examine": true, 00:23:05.838 "iobuf_small_cache_size": 128, 00:23:05.838 "iobuf_large_cache_size": 16 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "bdev_raid_set_options", 00:23:05.838 "params": { 00:23:05.838 "process_window_size_kb": 1024, 00:23:05.838 "process_max_bandwidth_mb_sec": 0 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "bdev_iscsi_set_options", 00:23:05.838 "params": { 00:23:05.838 "timeout_sec": 30 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "bdev_nvme_set_options", 00:23:05.838 "params": { 00:23:05.838 "action_on_timeout": "none", 00:23:05.838 "timeout_us": 0, 00:23:05.838 "timeout_admin_us": 0, 00:23:05.838 "keep_alive_timeout_ms": 10000, 00:23:05.838 "arbitration_burst": 0, 00:23:05.838 "low_priority_weight": 0, 00:23:05.838 "medium_priority_weight": 0, 00:23:05.838 "high_priority_weight": 0, 00:23:05.838 "nvme_adminq_poll_period_us": 10000, 00:23:05.838 "nvme_ioq_poll_period_us": 0, 00:23:05.838 "io_queue_requests": 0, 00:23:05.838 "delay_cmd_submit": true, 00:23:05.838 "transport_retry_count": 4, 00:23:05.838 "bdev_retry_count": 3, 00:23:05.838 "transport_ack_timeout": 0, 00:23:05.838 "ctrlr_loss_timeout_sec": 0, 00:23:05.838 "reconnect_delay_sec": 0, 00:23:05.838 "fast_io_fail_timeout_sec": 0, 00:23:05.838 "disable_auto_failback": false, 00:23:05.838 "generate_uuids": false, 00:23:05.838 "transport_tos": 0, 00:23:05.838 "nvme_error_stat": false, 00:23:05.838 "rdma_srq_size": 0, 00:23:05.838 "io_path_stat": false, 00:23:05.838 "allow_accel_sequence": false, 00:23:05.838 "rdma_max_cq_size": 0, 00:23:05.838 "rdma_cm_event_timeout_ms": 0, 00:23:05.838 "dhchap_digests": [ 00:23:05.838 "sha256", 00:23:05.838 "sha384", 00:23:05.838 "sha512" 00:23:05.838 ], 00:23:05.838 "dhchap_dhgroups": [ 00:23:05.838 "null", 00:23:05.838 "ffdhe2048", 00:23:05.838 "ffdhe3072", 00:23:05.838 "ffdhe4096", 00:23:05.838 "ffdhe6144", 00:23:05.838 "ffdhe8192" 00:23:05.838 ] 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "bdev_nvme_set_hotplug", 00:23:05.838 "params": { 00:23:05.838 "period_us": 100000, 00:23:05.838 "enable": false 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "bdev_malloc_create", 00:23:05.838 "params": { 00:23:05.838 "name": "malloc0", 00:23:05.838 "num_blocks": 8192, 00:23:05.838 "block_size": 4096, 00:23:05.838 
"physical_block_size": 4096, 00:23:05.838 "uuid": "7d249493-51fa-41dd-994c-d6c9c2f3d552", 00:23:05.838 "optimal_io_boundary": 0, 00:23:05.838 "md_size": 0, 00:23:05.838 "dif_type": 0, 00:23:05.838 "dif_is_head_of_md": false, 00:23:05.838 "dif_pi_format": 0 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "bdev_wait_for_examine" 00:23:05.838 } 00:23:05.838 ] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "nbd", 00:23:05.838 "config": [] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "scheduler", 00:23:05.838 "config": [ 00:23:05.838 { 00:23:05.838 "method": "framework_set_scheduler", 00:23:05.838 "params": { 00:23:05.838 "name": "static" 00:23:05.838 } 00:23:05.838 } 00:23:05.838 ] 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "subsystem": "nvmf", 00:23:05.838 "config": [ 00:23:05.838 { 00:23:05.838 "method": "nvmf_set_config", 00:23:05.838 "params": { 00:23:05.838 "discovery_filter": "match_any", 00:23:05.838 "admin_cmd_passthru": { 00:23:05.838 "identify_ctrlr": false 00:23:05.838 } 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "nvmf_set_max_subsystems", 00:23:05.838 "params": { 00:23:05.838 "max_subsystems": 1024 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "nvmf_set_crdt", 00:23:05.838 "params": { 00:23:05.838 "crdt1": 0, 00:23:05.838 "crdt2": 0, 00:23:05.838 "crdt3": 0 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "nvmf_create_transport", 00:23:05.838 "params": { 00:23:05.838 "trtype": "TCP", 00:23:05.838 "max_queue_depth": 128, 00:23:05.838 "max_io_qpairs_per_ctrlr": 127, 00:23:05.838 "in_capsule_data_size": 4096, 00:23:05.838 "max_io_size": 131072, 00:23:05.838 "io_unit_size": 131072, 00:23:05.838 "max_aq_depth": 128, 00:23:05.838 "num_shared_buffers": 511, 00:23:05.838 "buf_cache_size": 4294967295, 00:23:05.838 "dif_insert_or_strip": false, 00:23:05.838 "zcopy": false, 00:23:05.838 "c2h_success": false, 00:23:05.838 "sock_priority": 0, 00:23:05.838 "abort_timeout_sec": 1, 00:23:05.838 "ack_timeout": 0, 00:23:05.838 "data_wr_pool_size": 0 00:23:05.838 } 00:23:05.838 }, 00:23:05.838 { 00:23:05.838 "method": "nvmf_create_subsystem", 00:23:05.838 "params": { 00:23:05.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.839 "allow_any_host": false, 00:23:05.839 "serial_number": "00000000000000000000", 00:23:05.839 "model_number": "SPDK bdev Controller", 00:23:05.839 "max_namespaces": 32, 00:23:05.839 "min_cntlid": 1, 00:23:05.839 "max_cntlid": 65519, 00:23:05.839 "ana_reporting": false 00:23:05.839 } 00:23:05.839 }, 00:23:05.839 { 00:23:05.839 "method": "nvmf_subsystem_add_host", 00:23:05.839 "params": { 00:23:05.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.839 "host": "nqn.2016-06.io.spdk:host1", 00:23:05.839 "psk": "key0" 00:23:05.839 } 00:23:05.839 }, 00:23:05.839 { 00:23:05.839 "method": "nvmf_subsystem_add_ns", 00:23:05.839 "params": { 00:23:05.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.839 "namespace": { 00:23:05.839 "nsid": 1, 00:23:05.839 "bdev_name": "malloc0", 00:23:05.839 "nguid": "7D24949351FA41DD994CD6C9C2F3D552", 00:23:05.839 "uuid": "7d249493-51fa-41dd-994c-d6c9c2f3d552", 00:23:05.839 "no_auto_visible": false 00:23:05.839 } 00:23:05.839 } 00:23:05.839 }, 00:23:05.839 { 00:23:05.839 "method": "nvmf_subsystem_add_listener", 00:23:05.839 "params": { 00:23:05.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.839 "listen_address": { 00:23:05.839 "trtype": "TCP", 00:23:05.839 "adrfam": "IPv4", 00:23:05.839 "traddr": "10.0.0.2", 00:23:05.839 "trsvcid": "4420" 
00:23:05.839 }, 00:23:05.839 "secure_channel": false, 00:23:05.839 "sock_impl": "ssl" 00:23:05.839 } 00:23:05.839 } 00:23:05.839 ] 00:23:05.839 } 00:23:05.839 ] 00:23:05.839 }' 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1779900 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1779900 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1779900 ']' 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.839 06:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.839 [2024-07-23 06:18:59.022833] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:23:05.839 [2024-07-23 06:18:59.022924] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.839 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.839 [2024-07-23 06:18:59.059857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:05.839 [2024-07-23 06:18:59.085806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.839 [2024-07-23 06:18:59.168083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.839 [2024-07-23 06:18:59.168140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.839 [2024-07-23 06:18:59.168179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.839 [2024-07-23 06:18:59.168190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.839 [2024-07-23 06:18:59.168200] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
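The target-side configuration echoed above wires TLS in three places: a keyring entry (keyring_file_add_key registering "key0" from /tmp/tmp.xOlCtzUide), the "ssl" sock implementation options, and the nvmf entries that reference the key ("psk": "key0" on nvmf_subsystem_add_host, "sock_impl": "ssl" and "secure_channel": false on the listener). A minimal sketch of how the TLS-relevant pieces of that JSON fit together before being fed to nvmf_tgt on /dev/fd/62 (standard-library Python only; names, paths, NQNs and the address are copied from the log, while the bdev/malloc namespace plumbing and all tuning defaults are omitted for brevity):

    import json

    # Sketch of the TLS-relevant portion of the nvmf_tgt config shown in the log.
    PSK_PATH = "/tmp/tmp.xOlCtzUide"            # interchange-format PSK file
    SUBSYS   = "nqn.2016-06.io.spdk:cnode1"
    HOST     = "nqn.2016-06.io.spdk:host1"

    config = {
        "subsystems": [
            {   # register the PSK under the name the nvmf subsystem refers to
                "subsystem": "keyring",
                "config": [{"method": "keyring_file_add_key",
                            "params": {"name": "key0", "path": PSK_PATH}}],
            },
            {   # posix stays the default impl; the listener below selects "ssl"
                "subsystem": "sock",
                "config": [{"method": "sock_set_default_impl",
                            "params": {"impl_name": "posix"}}],
            },
            {
                "subsystem": "nvmf",
                "config": [
                    {"method": "nvmf_create_transport",
                     "params": {"trtype": "TCP"}},
                    {"method": "nvmf_create_subsystem",
                     "params": {"nqn": SUBSYS, "allow_any_host": False}},
                    {"method": "nvmf_subsystem_add_host",
                     "params": {"nqn": SUBSYS, "host": HOST, "psk": "key0"}},
                    {"method": "nvmf_subsystem_add_listener",
                     "params": {"nqn": SUBSYS,
                                "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                                   "traddr": "10.0.0.2",
                                                   "trsvcid": "4420"},
                                "secure_channel": False,
                                "sock_impl": "ssl"}},
                ],
            },
        ],
    }

    print(json.dumps(config, indent=2))   # piped into: nvmf_tgt ... -c /dev/fd/62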
00:23:05.839 [2024-07-23 06:18:59.168272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.097 [2024-07-23 06:18:59.413353] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.357 [2024-07-23 06:18:59.450520] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.357 [2024-07-23 06:18:59.450771] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1780050 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1780050 /var/tmp/bdevperf.sock 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1780050 ']' 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.930 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:06.930 "subsystems": [ 00:23:06.930 { 00:23:06.930 "subsystem": "keyring", 00:23:06.930 "config": [ 00:23:06.930 { 00:23:06.930 "method": "keyring_file_add_key", 00:23:06.930 "params": { 00:23:06.930 "name": "key0", 00:23:06.930 "path": "/tmp/tmp.xOlCtzUide" 00:23:06.930 } 00:23:06.930 } 00:23:06.930 ] 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "subsystem": "iobuf", 00:23:06.930 "config": [ 00:23:06.930 { 00:23:06.930 "method": "iobuf_set_options", 00:23:06.930 "params": { 00:23:06.930 "small_pool_count": 8192, 00:23:06.930 "large_pool_count": 1024, 00:23:06.930 "small_bufsize": 8192, 00:23:06.930 "large_bufsize": 135168 00:23:06.930 } 00:23:06.930 } 00:23:06.930 ] 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "subsystem": "sock", 00:23:06.930 "config": [ 00:23:06.930 { 00:23:06.930 "method": "sock_set_default_impl", 00:23:06.930 "params": { 00:23:06.930 "impl_name": "posix" 00:23:06.930 } 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "method": "sock_impl_set_options", 00:23:06.930 "params": { 00:23:06.930 "impl_name": "ssl", 00:23:06.930 "recv_buf_size": 4096, 00:23:06.930 "send_buf_size": 4096, 00:23:06.930 "enable_recv_pipe": true, 00:23:06.930 "enable_quickack": false, 00:23:06.930 "enable_placement_id": 0, 00:23:06.930 "enable_zerocopy_send_server": true, 00:23:06.930 "enable_zerocopy_send_client": false, 00:23:06.930 "zerocopy_threshold": 0, 00:23:06.930 "tls_version": 0, 00:23:06.930 "enable_ktls": false 00:23:06.930 } 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "method": "sock_impl_set_options", 00:23:06.930 "params": { 00:23:06.931 "impl_name": "posix", 00:23:06.931 "recv_buf_size": 2097152, 00:23:06.931 "send_buf_size": 2097152, 00:23:06.931 "enable_recv_pipe": true, 00:23:06.931 "enable_quickack": false, 00:23:06.931 "enable_placement_id": 0, 00:23:06.931 "enable_zerocopy_send_server": true, 00:23:06.931 "enable_zerocopy_send_client": false, 00:23:06.931 "zerocopy_threshold": 0, 00:23:06.931 "tls_version": 0, 00:23:06.931 "enable_ktls": false 00:23:06.931 } 00:23:06.931 } 00:23:06.931 ] 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "subsystem": "vmd", 00:23:06.931 "config": [] 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "subsystem": "accel", 00:23:06.931 "config": [ 00:23:06.931 { 00:23:06.931 "method": "accel_set_options", 00:23:06.931 "params": { 00:23:06.931 "small_cache_size": 128, 00:23:06.931 "large_cache_size": 16, 00:23:06.931 "task_count": 2048, 00:23:06.931 "sequence_count": 2048, 00:23:06.931 "buf_count": 2048 00:23:06.931 } 00:23:06.931 } 00:23:06.931 ] 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "subsystem": "bdev", 00:23:06.931 "config": [ 00:23:06.931 { 00:23:06.931 "method": "bdev_set_options", 00:23:06.931 "params": { 00:23:06.931 "bdev_io_pool_size": 65535, 00:23:06.931 "bdev_io_cache_size": 256, 00:23:06.931 "bdev_auto_examine": true, 00:23:06.931 "iobuf_small_cache_size": 128, 00:23:06.931 "iobuf_large_cache_size": 16 00:23:06.931 } 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "method": "bdev_raid_set_options", 00:23:06.931 
"params": { 00:23:06.931 "process_window_size_kb": 1024, 00:23:06.931 "process_max_bandwidth_mb_sec": 0 00:23:06.931 } 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "method": "bdev_iscsi_set_options", 00:23:06.931 "params": { 00:23:06.931 "timeout_sec": 30 00:23:06.931 } 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "method": "bdev_nvme_set_options", 00:23:06.931 "params": { 00:23:06.931 "action_on_timeout": "none", 00:23:06.931 "timeout_us": 0, 00:23:06.931 "timeout_admin_us": 0, 00:23:06.931 "keep_alive_timeout_ms": 10000, 00:23:06.931 "arbitration_burst": 0, 00:23:06.931 "low_priority_weight": 0, 00:23:06.931 "medium_priority_weight": 0, 00:23:06.931 "high_priority_weight": 0, 00:23:06.931 "nvme_adminq_poll_period_us": 10000, 00:23:06.931 "nvme_ioq_poll_period_us": 0, 00:23:06.931 "io_queue_requests": 512, 00:23:06.931 "delay_cmd_submit": true, 00:23:06.931 "transport_retry_count": 4, 00:23:06.931 "bdev_retry_count": 3, 00:23:06.931 "transport_ack_timeout": 0, 00:23:06.931 "ctrlr_loss_timeout_sec": 0, 00:23:06.931 "reconnect_delay_sec": 0, 00:23:06.931 "fast_io_fail_timeout_sec": 0, 00:23:06.931 "disable_auto_failback": false, 00:23:06.931 "generate_uuids": false, 00:23:06.931 "transport_tos": 0, 00:23:06.931 "nvme_error_stat": false, 00:23:06.931 "rdma_srq_size": 0, 00:23:06.931 "io_path_stat": false, 00:23:06.931 "allow_accel_sequence": false, 00:23:06.931 "rdma_max_cq_size": 0, 00:23:06.931 "rdma_cm_event_timeout_ms": 0, 00:23:06.931 "dhchap_digests": [ 00:23:06.931 "sha256", 00:23:06.931 "sha384", 00:23:06.931 "sha512" 00:23:06.931 ], 00:23:06.931 "dhchap_dhgroups": [ 00:23:06.931 "null", 00:23:06.931 "ffdhe2048", 00:23:06.931 "ffdhe3072", 00:23:06.931 "ffdhe4096", 00:23:06.931 "ffdhe6144", 00:23:06.931 "ffdhe8192" 00:23:06.931 ] 00:23:06.931 } 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "method": "bdev_nvme_attach_controller", 00:23:06.931 "params": { 00:23:06.931 "name": "nvme0", 00:23:06.931 "trtype": "TCP", 00:23:06.931 "adrfam": "IPv4", 00:23:06.931 "traddr": "10.0.0.2", 00:23:06.931 "trsvcid": "4420", 00:23:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.931 "prchk_reftag": false, 00:23:06.931 "prchk_guard": false, 00:23:06.931 "ctrlr_loss_timeout_sec": 0, 00:23:06.931 "reconnect_delay_sec": 0, 00:23:06.931 "fast_io_fail_timeout_sec": 0, 00:23:06.931 "psk": "key0", 00:23:06.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.931 "hdgst": false, 00:23:06.931 "ddgst": false 00:23:06.931 } 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "method": "bdev_nvme_set_hotplug", 00:23:06.931 "params": { 00:23:06.931 "period_us": 100000, 00:23:06.931 "enable": false 00:23:06.931 } 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "method": "bdev_enable_histogram", 00:23:06.931 "params": { 00:23:06.931 "name": "nvme0n1", 00:23:06.931 "enable": true 00:23:06.931 } 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "method": "bdev_wait_for_examine" 00:23:06.931 } 00:23:06.931 ] 00:23:06.931 }, 00:23:06.931 { 00:23:06.931 "subsystem": "nbd", 00:23:06.931 "config": [] 00:23:06.931 } 00:23:06.931 ] 00:23:06.931 }' 00:23:06.931 [2024-07-23 06:19:00.083036] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:23:06.931 [2024-07-23 06:19:00.083129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780050 ] 00:23:06.931 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.931 [2024-07-23 06:19:00.121431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:06.931 [2024-07-23 06:19:00.149101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.931 [2024-07-23 06:19:00.238479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.190 [2024-07-23 06:19:00.419004] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.756 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.756 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.756 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.756 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:08.013 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.013 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.272 Running I/O for 1 seconds... 
00:23:09.208 00:23:09.209 Latency(us) 00:23:09.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.209 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:09.209 Verification LBA range: start 0x0 length 0x2000 00:23:09.209 nvme0n1 : 1.06 1782.24 6.96 0.00 0.00 70038.46 6505.05 114178.28 00:23:09.209 =================================================================================================================== 00:23:09.209 Total : 1782.24 6.96 0.00 0.00 70038.46 6505.05 114178.28 00:23:09.209 0 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:09.209 nvmf_trace.0 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1780050 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1780050 ']' 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1780050 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.209 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1780050 00:23:09.466 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1780050' 00:23:09.467 killing process with pid 1780050 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1780050 00:23:09.467 Received shutdown signal, test time was about 1.000000 seconds 00:23:09.467 00:23:09.467 Latency(us) 00:23:09.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.467 
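The single result row above is internally consistent and worth a quick sanity check: 1782.24 IOPS of 4096-byte verify I/O works out to 1782.24 × 4096 / 2^20 ≈ 6.96 MiB/s, matching the MiB/s column, and with a queue depth of 128 Little's law predicts an average latency of 128 / 1782.24 ≈ 71.8 ms, in the same ballpark as the reported 70,038.46 µs (the gap is ramp-up and measurement overhead in a 1-second run). A small check with the figures copied from the table:

    iops, io_size, qdepth = 1782.24, 4096, 128

    mibps = iops * io_size / 2**20           # throughput implied by the IOPS column
    avg_latency_us = qdepth / iops * 1e6     # Little's law: latency ~= QD / IOPS

    print(f"{mibps:.2f} MiB/s")              # ~6.96, matching the MiB/s column
    print(f"{avg_latency_us:.0f} us")        # ~71823, close to the reported 70038.46 us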
=================================================================================================================== 00:23:09.467 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1780050 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.467 rmmod nvme_tcp 00:23:09.467 rmmod nvme_fabrics 00:23:09.467 rmmod nvme_keyring 00:23:09.467 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1779900 ']' 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1779900 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1779900 ']' 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1779900 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1779900 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1779900' 00:23:09.725 killing process with pid 1779900 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1779900 00:23:09.725 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1779900 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:23:09.985 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.22v00DnHfE /tmp/tmp.QyKE863fQG /tmp/tmp.xOlCtzUide 00:23:11.891 00:23:11.891 real 1m19.350s 00:23:11.891 user 1m56.927s 00:23:11.891 sys 0m29.241s 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.891 ************************************ 00:23:11.891 END TEST nvmf_tls 00:23:11.891 ************************************ 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.891 ************************************ 00:23:11.891 START TEST nvmf_fips 00:23:11.891 ************************************ 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:11.891 * Looking for test storage... 
00:23:11.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.891 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.152 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:12.153 Error setting digest 00:23:12.153 0062B7CA257F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:12.153 0062B7CA257F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:12.153 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:12.154 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.154 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:14.056 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
00:23:14.056 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:14.056 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:14.056 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:14.056 
06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.056 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:14.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:23:14.314 00:23:14.314 --- 10.0.0.2 ping statistics --- 00:23:14.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.314 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:14.314 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:23:14.314 00:23:14.314 --- 10.0.0.1 ping statistics --- 00:23:14.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.315 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1782300 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1782300 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1782300 ']' 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.315 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:14.315 [2024-07-23 06:19:07.624256] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:23:14.315 [2024-07-23 06:19:07.624366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.574 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.574 [2024-07-23 06:19:07.663988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:14.574 [2024-07-23 06:19:07.692521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.574 [2024-07-23 06:19:07.781603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.574 [2024-07-23 06:19:07.781661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.574 [2024-07-23 06:19:07.781675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.574 [2024-07-23 06:19:07.781686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.574 [2024-07-23 06:19:07.781696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.574 [2024-07-23 06:19:07.781722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.574 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.574 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:14.574 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.574 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.574 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.832 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.090 [2024-07-23 06:19:08.196849] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.090 [2024-07-23 06:19:08.212843] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:15.090 [2024-07-23 06:19:08.213068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.090 [2024-07-23 06:19:08.244947] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:15.090 malloc0 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1782438 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1782438 /var/tmp/bdevperf.sock 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1782438 ']' 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.091 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:15.091 [2024-07-23 06:19:08.338546] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:23:15.091 [2024-07-23 06:19:08.338650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1782438 ] 00:23:15.091 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.091 [2024-07-23 06:19:08.370400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
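For reference, a minimal bash sketch of the TLS/PSK attach flow this fips.sh run exercises, assuming the nvmf target traced earlier in this log is already listening on 10.0.0.2:4420 inside the cvl_0_0_ns_spdk namespace and that SPDK's rpc.py is on PATH (the log invokes it by full workspace path); the interchange-format key and the attach parameters are copied from the commands traced in this section.

#!/usr/bin/env bash
# Interchange-format TLS PSK written by fips.sh; the test restricts it to mode 0600.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt
chmod 0600 key.txt
# bdevperf was started with -z -r /var/tmp/bdevperf.sock, so the controller is
# attached through its private RPC socket rather than /var/tmp/spdk.sock.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt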
00:23:15.091 [2024-07-23 06:19:08.397104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.349 [2024-07-23 06:19:08.483096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.349 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.349 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:15.349 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:15.606 [2024-07-23 06:19:08.839545] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.606 [2024-07-23 06:19:08.839687] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:15.606 TLSTESTn1 00:23:15.606 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:15.864 Running I/O for 10 seconds... 00:23:25.826 00:23:25.826 Latency(us) 00:23:25.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.826 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:25.826 Verification LBA range: start 0x0 length 0x2000 00:23:25.826 TLSTESTn1 : 10.06 1915.66 7.48 0.00 0.00 66615.14 10243.03 97090.37 00:23:25.826 =================================================================================================================== 00:23:25.826 Total : 1915.66 7.48 0.00 0.00 66615.14 10243.03 97090.37 00:23:25.826 0 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:25.826 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:25.826 nvmf_trace.0 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1782438 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' 
-z 1782438 ']' 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1782438 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1782438 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1782438' 00:23:26.083 killing process with pid 1782438 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1782438 00:23:26.083 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.083 00:23:26.083 Latency(us) 00:23:26.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.083 =================================================================================================================== 00:23:26.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.083 [2024-07-23 06:19:19.257184] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:26.083 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1782438 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.344 rmmod nvme_tcp 00:23:26.344 rmmod nvme_fabrics 00:23:26.344 rmmod nvme_keyring 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1782300 ']' 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1782300 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1782300 ']' 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1782300 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1782300 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1782300' 00:23:26.344 killing process with pid 1782300 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1782300 00:23:26.344 [2024-07-23 06:19:19.557874] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.344 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1782300 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.605 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:29.140 00:23:29.140 real 0m16.691s 00:23:29.140 user 0m20.450s 00:23:29.140 sys 0m6.555s 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:29.140 ************************************ 00:23:29.140 END TEST nvmf_fips 00:23:29.140 ************************************ 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:29.140 ************************************ 00:23:29.140 START TEST nvmf_fuzz 00:23:29.140 ************************************ 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:29.140 * Looking for test storage... 
00:23:29.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:29.140 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:29.141 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:31.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.045 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:31.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.045 06:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:31.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:31.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.045 
06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:23:31.045 00:23:31.045 --- 10.0.0.2 ping statistics --- 00:23:31.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.045 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:23:31.045 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:23:31.045 00:23:31.045 --- 10.0.0.1 ping statistics --- 00:23:31.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.045 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1785687 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1785687 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1785687 
']' 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.046 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:31.304 Malloc0 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:31.304 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:03.364 Fuzzing completed. Shutting down the fuzz application 00:24:03.364 00:24:03.364 Dumping successful admin opcodes: 00:24:03.364 8, 9, 10, 24, 00:24:03.364 Dumping successful io opcodes: 00:24:03.364 0, 9, 00:24:03.364 NS: 0x200003aeff00 I/O qp, Total commands completed: 444246, total successful commands: 2581, random_seed: 661191616 00:24:03.364 NS: 0x200003aeff00 admin qp, Total commands completed: 55808, total successful commands: 444, random_seed: 560554496 00:24:03.364 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:03.364 Fuzzing completed. Shutting down the fuzz application 00:24:03.364 00:24:03.364 Dumping successful admin opcodes: 00:24:03.364 24, 00:24:03.364 Dumping successful io opcodes: 00:24:03.364 00:24:03.364 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1502883656 00:24:03.364 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1503008712 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.364 rmmod nvme_tcp 00:24:03.364 rmmod nvme_fabrics 00:24:03.364 rmmod nvme_keyring 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1785687 ']' 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
1785687 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1785687 ']' 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1785687 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1785687 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1785687' 00:24:03.364 killing process with pid 1785687 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1785687 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1785687 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.364 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:05.895 00:24:05.895 real 0m36.857s 00:24:05.895 user 0m50.531s 00:24:05.895 sys 0m15.666s 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.895 ************************************ 00:24:05.895 END TEST nvmf_fuzz 00:24:05.895 ************************************ 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.895 06:19:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:05.895 ************************************ 00:24:05.895 START TEST nvmf_multiconnection 00:24:05.896 ************************************ 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:05.896 * Looking for test storage... 00:24:05.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:05.896 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.798 06:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:07.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:07.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.798 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:07.799 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:07.799 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:07.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:24:07.799 00:24:07.799 --- 10.0.0.2 ping statistics --- 00:24:07.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.799 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:24:07.799 00:24:07.799 --- 10.0.0.1 ping statistics --- 00:24:07.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.799 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1791279 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1791279 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1791279 ']' 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
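Condensed, the nvmf_tcp_init sequence traced above builds a two-namespace TCP loopback: the target interface is moved into its own network namespace, each side gets one address on 10.0.0.0/24, port 4420 is opened, reachability is checked in both directions, and nvmf_tgt is then launched inside the target namespace. The sketch below only restates the commands visible in this trace (interface names cvl_0_0/cvl_0_1, the workspace path, and the 0xF core mask are this run's values, not fixed requirements); backgrounding with & is an assumption about how the helper runs the binary before waitforlisten polls /var/tmp/spdk.sock.

  # target side lives in its own namespace; the initiator stays in the default one
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
  modprobe nvme-tcp
  # start the SPDK target in the target namespace; -m 0xF matches the four reactors reported below
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &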
00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.799 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.799 [2024-07-23 06:20:00.955178] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:24:07.799 [2024-07-23 06:20:00.955248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.799 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.799 [2024-07-23 06:20:00.994286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:07.799 [2024-07-23 06:20:01.020652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.799 [2024-07-23 06:20:01.112115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.799 [2024-07-23 06:20:01.112164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.799 [2024-07-23 06:20:01.112188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.799 [2024-07-23 06:20:01.112199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.799 [2024-07-23 06:20:01.112210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.799 [2024-07-23 06:20:01.112276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.799 [2024-07-23 06:20:01.112336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.799 [2024-07-23 06:20:01.112403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.799 [2024-07-23 06:20:01.112406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.058 [2024-07-23 06:20:01.270812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.058 06:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.058 Malloc1 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.058 [2024-07-23 06:20:01.327120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.058 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 Malloc2 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.059 06:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.059 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 Malloc3 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 Malloc4 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 Malloc5 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 
06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 Malloc6 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.318 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.319 06:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 Malloc7 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 Malloc8 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.319 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:08.582 06:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.582 Malloc9 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.582 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 Malloc10 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 Malloc11 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
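The provisioning phase traced above repeats the same four RPCs for each of the eleven subsystems (NVMF_SUBSYS=11, MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 per the variables set earlier). A condensed sketch of what multiconnection.sh drives through rpc_cmd, assuming rpc_cmd points at the running target's RPC socket as in this run:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # one TCP transport for the target

  for i in $(seq 1 11); do
      rpc_cmd bdev_malloc_create 64 512 -b Malloc$i           # 64 MB malloc bdev, 512-byte blocks
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # -a: allow any host, -s: serial
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i       # expose the bdev as a namespace
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done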
00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.583 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:09.518 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:09.518 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:09.518 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.518 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:09.518 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.433 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:11.998 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:11.998 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:11.998 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:11.999 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:11.999 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.893 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:14.825 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:14.825 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:14.825 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:14.825 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:14.825 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.720 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:17.284 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:17.284 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:17.284 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:17.284 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:17.284 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.807 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:20.067 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:20.067 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:20.067 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:20.067 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:20.067 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.588 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:22.845 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:22.845 06:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:22.845 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.845 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:22.845 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.367 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:25.627 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:25.627 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:25.627 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.627 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:25.627 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.152 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
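The connect/wait cycles above and below all repeat the same pattern from target/multiconnection.sh@28-30 and the waitforserial helper in common/autotest_common.sh. A condensed sketch of that pattern, assuming only the values visible in this log (host NQN/ID, target 10.0.0.2:4420, 11 subsystems) and not the exact upstream source:

HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
HOSTID="5b23e107-7094-e311-b1cb-001e67a97d55"
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # Connect the initiator to subsystem cnode$i over NVMe/TCP.
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # waitforserial: retry up to 16 times, sleeping 2 s between attempts,
    # until exactly one block device whose SERIAL column matches SPDK$i appears.
    attempt=0
    while (( attempt++ <= 15 )); do
        sleep 2
        count=$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")
        (( count == 1 )) && break
    done
done

Each successful iteration shows up in the log as the "(( nvme_devices == nvme_device_counter ))" check passing, followed by "return 0".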
00:24:28.717 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:28.717 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:28.717 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:28.717 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:28.717 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.615 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:31.547 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:31.547 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:31.547 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:31.547 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:31.547 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:33.440 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:33.441 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:33.441 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:33.441 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:33.441 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:33.441 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:33.441 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.441 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:34.372 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:34.372 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:34.372 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.372 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:34.372 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.269 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:36.835 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:36.835 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:36.835 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.835 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:36.835 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:39.362 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:39.362 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:39.362 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:39.362 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:39.362 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:39.362 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:39.362 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:39.362 [global] 00:24:39.362 thread=1 00:24:39.362 invalidate=1 00:24:39.362 rw=read 00:24:39.362 time_based=1 00:24:39.362 runtime=10 00:24:39.362 ioengine=libaio 00:24:39.362 direct=1 00:24:39.362 bs=262144 00:24:39.362 iodepth=64 00:24:39.362 norandommap=1 00:24:39.362 numjobs=1 00:24:39.362 00:24:39.362 [job0] 00:24:39.362 filename=/dev/nvme0n1 00:24:39.362 [job1] 00:24:39.362 filename=/dev/nvme10n1 00:24:39.362 [job2] 00:24:39.362 filename=/dev/nvme1n1 00:24:39.362 [job3] 00:24:39.362 filename=/dev/nvme2n1 00:24:39.362 [job4] 00:24:39.362 filename=/dev/nvme3n1 00:24:39.362 [job5] 00:24:39.362 filename=/dev/nvme4n1 00:24:39.362 [job6] 00:24:39.362 filename=/dev/nvme5n1 00:24:39.362 [job7] 00:24:39.362 filename=/dev/nvme6n1 00:24:39.362 [job8] 00:24:39.362 filename=/dev/nvme7n1 00:24:39.362 [job9] 00:24:39.362 filename=/dev/nvme8n1 00:24:39.362 [job10] 00:24:39.362 filename=/dev/nvme9n1 00:24:39.362 Could not set queue depth (nvme0n1) 00:24:39.362 Could not set queue depth (nvme10n1) 00:24:39.362 Could not set queue depth (nvme1n1) 00:24:39.362 Could not set queue depth (nvme2n1) 00:24:39.362 Could not set queue depth (nvme3n1) 00:24:39.362 Could not set queue depth (nvme4n1) 00:24:39.362 Could not set queue depth (nvme5n1) 00:24:39.362 Could not set queue depth (nvme6n1) 00:24:39.362 Could not set queue depth (nvme7n1) 00:24:39.362 Could not set queue depth (nvme8n1) 00:24:39.362 Could not set queue depth (nvme9n1) 00:24:39.362 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:39.362 fio-3.35 00:24:39.362 Starting 11 threads 00:24:51.580 00:24:51.580 job0: (groupid=0, jobs=1): err= 0: pid=1795538: Tue Jul 23 06:20:42 2024 00:24:51.580 read: IOPS=309, BW=77.4MiB/s (81.2MB/s)(786MiB/10150msec) 00:24:51.580 slat (usec): min=14, max=239536, avg=2696.70, stdev=10258.54 00:24:51.580 clat (msec): min=80, max=606, avg=203.77, stdev=71.87 00:24:51.580 lat (msec): min=80, max=606, avg=206.46, stdev=72.63 00:24:51.580 clat percentiles (msec): 00:24:51.580 | 1.00th=[ 107], 5.00th=[ 136], 10.00th=[ 144], 20.00th=[ 159], 00:24:51.580 | 30.00th=[ 171], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 201], 00:24:51.580 | 
70.00th=[ 209], 80.00th=[ 230], 90.00th=[ 262], 95.00th=[ 330], 00:24:51.580 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 592], 99.95th=[ 609], 00:24:51.580 | 99.99th=[ 609] 00:24:51.580 bw ( KiB/s): min=34372, max=106496, per=4.89%, avg=78836.80, stdev=20168.65, samples=20 00:24:51.580 iops : min= 134, max= 416, avg=307.90, stdev=78.83, samples=20 00:24:51.580 lat (msec) : 100=0.92%, 250=86.48%, 500=11.04%, 750=1.56% 00:24:51.580 cpu : usr=0.21%, sys=1.18%, ctx=702, majf=0, minf=4097 00:24:51.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:24:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.580 issued rwts: total=3144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.580 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.580 job1: (groupid=0, jobs=1): err= 0: pid=1795539: Tue Jul 23 06:20:42 2024 00:24:51.580 read: IOPS=827, BW=207MiB/s (217MB/s)(2092MiB/10108msec) 00:24:51.580 slat (usec): min=10, max=152170, avg=923.88, stdev=4128.92 00:24:51.580 clat (usec): min=1349, max=292343, avg=76346.80, stdev=54071.62 00:24:51.580 lat (usec): min=1396, max=292360, avg=77270.68, stdev=54443.90 00:24:51.580 clat percentiles (msec): 00:24:51.580 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 35], 00:24:51.580 | 30.00th=[ 40], 40.00th=[ 53], 50.00th=[ 63], 60.00th=[ 77], 00:24:51.580 | 70.00th=[ 88], 80.00th=[ 101], 90.00th=[ 167], 95.00th=[ 201], 00:24:51.580 | 99.00th=[ 257], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 00:24:51.580 | 99.99th=[ 292] 00:24:51.580 bw ( KiB/s): min=70656, max=490496, per=13.17%, avg=212528.70, stdev=108697.40, samples=20 00:24:51.580 iops : min= 276, max= 1916, avg=830.15, stdev=424.62, samples=20 00:24:51.580 lat (msec) : 2=0.42%, 4=1.14%, 10=1.76%, 20=2.37%, 50=31.77% 00:24:51.580 lat (msec) : 100=42.33%, 250=19.08%, 500=1.15% 00:24:51.580 cpu : usr=0.56%, sys=2.52%, ctx=1819, majf=0, minf=4097 00:24:51.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.580 issued rwts: total=8366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.580 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.580 job2: (groupid=0, jobs=1): err= 0: pid=1795540: Tue Jul 23 06:20:42 2024 00:24:51.580 read: IOPS=540, BW=135MiB/s (142MB/s)(1363MiB/10088msec) 00:24:51.580 slat (usec): min=10, max=250244, avg=1072.77, stdev=7231.49 00:24:51.580 clat (msec): min=2, max=340, avg=117.25, stdev=80.82 00:24:51.580 lat (msec): min=2, max=585, avg=118.32, stdev=81.71 00:24:51.580 clat percentiles (msec): 00:24:51.580 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 22], 20.00th=[ 42], 00:24:51.580 | 30.00th=[ 57], 40.00th=[ 70], 50.00th=[ 101], 60.00th=[ 131], 00:24:51.580 | 70.00th=[ 176], 80.00th=[ 194], 90.00th=[ 236], 95.00th=[ 262], 00:24:51.580 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 338], 00:24:51.580 | 99.99th=[ 342] 00:24:51.580 bw ( KiB/s): min=65024, max=277504, per=8.55%, avg=137969.30, stdev=67192.27, samples=20 00:24:51.580 iops : min= 254, max= 1084, avg=538.85, stdev=262.51, samples=20 00:24:51.580 lat (msec) : 4=0.22%, 10=4.03%, 20=3.98%, 50=16.21%, 100=25.47% 00:24:51.580 lat (msec) : 250=42.69%, 500=7.39% 00:24:51.580 cpu : usr=0.36%, sys=1.75%, ctx=1373, majf=0, minf=4097 00:24:51.580 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.580 issued rwts: total=5453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.580 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.580 job3: (groupid=0, jobs=1): err= 0: pid=1795541: Tue Jul 23 06:20:42 2024 00:24:51.580 read: IOPS=732, BW=183MiB/s (192MB/s)(1852MiB/10118msec) 00:24:51.580 slat (usec): min=9, max=226227, avg=1093.56, stdev=5510.97 00:24:51.580 clat (usec): min=988, max=458766, avg=86265.08, stdev=61748.33 00:24:51.580 lat (usec): min=1010, max=458897, avg=87358.64, stdev=62563.84 00:24:51.580 clat percentiles (msec): 00:24:51.580 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 31], 20.00th=[ 38], 00:24:51.580 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 71], 60.00th=[ 85], 00:24:51.580 | 70.00th=[ 97], 80.00th=[ 122], 90.00th=[ 184], 95.00th=[ 222], 00:24:51.580 | 99.00th=[ 266], 99.50th=[ 347], 99.90th=[ 351], 99.95th=[ 351], 00:24:51.580 | 99.99th=[ 460] 00:24:51.580 bw ( KiB/s): min=42068, max=412672, per=11.65%, avg=187959.15, stdev=102443.80, samples=20 00:24:51.580 iops : min= 164, max= 1612, avg=734.15, stdev=400.10, samples=20 00:24:51.580 lat (usec) : 1000=0.01% 00:24:51.580 lat (msec) : 2=0.80%, 4=1.47%, 10=1.67%, 20=2.39%, 50=26.26% 00:24:51.580 lat (msec) : 100=39.57%, 250=26.20%, 500=1.62% 00:24:51.580 cpu : usr=0.35%, sys=2.36%, ctx=1666, majf=0, minf=4097 00:24:51.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.580 issued rwts: total=7407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.580 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.580 job4: (groupid=0, jobs=1): err= 0: pid=1795542: Tue Jul 23 06:20:42 2024 00:24:51.580 read: IOPS=331, BW=82.9MiB/s (86.9MB/s)(840MiB/10141msec) 00:24:51.580 slat (usec): min=14, max=163876, avg=2881.10, stdev=9906.84 00:24:51.580 clat (msec): min=80, max=604, avg=190.07, stdev=75.19 00:24:51.580 lat (msec): min=83, max=636, avg=192.95, stdev=76.54 00:24:51.580 clat percentiles (msec): 00:24:51.580 | 1.00th=[ 92], 5.00th=[ 107], 10.00th=[ 123], 20.00th=[ 140], 00:24:51.580 | 30.00th=[ 153], 40.00th=[ 169], 50.00th=[ 182], 60.00th=[ 190], 00:24:51.580 | 70.00th=[ 201], 80.00th=[ 222], 90.00th=[ 255], 95.00th=[ 296], 00:24:51.580 | 99.00th=[ 542], 99.50th=[ 567], 99.90th=[ 609], 99.95th=[ 609], 00:24:51.580 | 99.99th=[ 609] 00:24:51.580 bw ( KiB/s): min=30208, max=128512, per=5.23%, avg=84387.30, stdev=25457.44, samples=20 00:24:51.580 iops : min= 118, max= 502, avg=329.60, stdev=99.45, samples=20 00:24:51.580 lat (msec) : 100=2.74%, 250=85.93%, 500=9.73%, 750=1.61% 00:24:51.580 cpu : usr=0.28%, sys=1.18%, ctx=632, majf=0, minf=3724 00:24:51.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:24:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.580 issued rwts: total=3361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.580 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.580 job5: (groupid=0, jobs=1): err= 0: pid=1795543: Tue Jul 23 06:20:42 2024 00:24:51.581 read: IOPS=392, 
BW=98.1MiB/s (103MB/s)(996MiB/10148msec) 00:24:51.581 slat (usec): min=9, max=271924, avg=2156.88, stdev=8817.40 00:24:51.581 clat (usec): min=1049, max=601754, avg=160750.71, stdev=103905.58 00:24:51.581 lat (usec): min=1071, max=645847, avg=162907.59, stdev=105483.71 00:24:51.581 clat percentiles (msec): 00:24:51.581 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 33], 20.00th=[ 63], 00:24:51.581 | 30.00th=[ 106], 40.00th=[ 146], 50.00th=[ 167], 60.00th=[ 186], 00:24:51.581 | 70.00th=[ 203], 80.00th=[ 220], 90.00th=[ 249], 95.00th=[ 342], 00:24:51.581 | 99.00th=[ 510], 99.50th=[ 550], 99.90th=[ 592], 99.95th=[ 592], 00:24:51.581 | 99.99th=[ 600] 00:24:51.581 bw ( KiB/s): min=30208, max=251912, per=6.22%, avg=100319.95, stdev=54828.61, samples=20 00:24:51.581 iops : min= 118, max= 984, avg=391.85, stdev=214.18, samples=20 00:24:51.581 lat (msec) : 2=0.05%, 4=1.78%, 10=3.66%, 20=1.58%, 50=9.06% 00:24:51.581 lat (msec) : 100=13.43%, 250=60.52%, 500=8.51%, 750=1.41% 00:24:51.581 cpu : usr=0.28%, sys=1.41%, ctx=978, majf=0, minf=4097 00:24:51.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:51.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.581 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.581 job6: (groupid=0, jobs=1): err= 0: pid=1795544: Tue Jul 23 06:20:42 2024 00:24:51.581 read: IOPS=445, BW=111MiB/s (117MB/s)(1127MiB/10121msec) 00:24:51.581 slat (usec): min=14, max=216955, avg=1812.17, stdev=8170.88 00:24:51.581 clat (usec): min=1767, max=354568, avg=141822.92, stdev=66654.40 00:24:51.581 lat (usec): min=1785, max=397807, avg=143635.09, stdev=67871.48 00:24:51.581 clat percentiles (msec): 00:24:51.581 | 1.00th=[ 5], 5.00th=[ 43], 10.00th=[ 68], 20.00th=[ 78], 00:24:51.581 | 30.00th=[ 90], 40.00th=[ 106], 50.00th=[ 148], 60.00th=[ 171], 00:24:51.581 | 70.00th=[ 190], 80.00th=[ 205], 90.00th=[ 224], 95.00th=[ 247], 00:24:51.581 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 313], 00:24:51.581 | 99.99th=[ 355] 00:24:51.581 bw ( KiB/s): min=64512, max=211968, per=7.05%, avg=113693.50, stdev=43088.20, samples=20 00:24:51.581 iops : min= 252, max= 828, avg=444.05, stdev=168.30, samples=20 00:24:51.581 lat (msec) : 2=0.07%, 4=0.62%, 10=1.07%, 20=0.60%, 50=3.73% 00:24:51.581 lat (msec) : 100=30.83%, 250=58.61%, 500=4.48% 00:24:51.581 cpu : usr=0.25%, sys=1.64%, ctx=1051, majf=0, minf=4097 00:24:51.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:51.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.581 issued rwts: total=4506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.581 job7: (groupid=0, jobs=1): err= 0: pid=1795545: Tue Jul 23 06:20:42 2024 00:24:51.581 read: IOPS=873, BW=218MiB/s (229MB/s)(2207MiB/10110msec) 00:24:51.581 slat (usec): min=9, max=119426, avg=756.64, stdev=3672.13 00:24:51.581 clat (usec): min=1283, max=282422, avg=72481.98, stdev=51944.82 00:24:51.581 lat (usec): min=1298, max=313418, avg=73238.62, stdev=52469.73 00:24:51.581 clat percentiles (msec): 00:24:51.581 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 26], 20.00th=[ 34], 00:24:51.581 | 30.00th=[ 38], 40.00th=[ 44], 50.00th=[ 55], 60.00th=[ 70], 
00:24:51.581 | 70.00th=[ 89], 80.00th=[ 107], 90.00th=[ 155], 95.00th=[ 186], 00:24:51.581 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 264], 99.95th=[ 271], 00:24:51.581 | 99.99th=[ 284] 00:24:51.581 bw ( KiB/s): min=83968, max=486912, per=13.90%, avg=224354.05, stdev=104418.42, samples=20 00:24:51.581 iops : min= 328, max= 1902, avg=876.35, stdev=407.90, samples=20 00:24:51.581 lat (msec) : 2=0.05%, 4=1.18%, 10=2.72%, 20=3.34%, 50=38.49% 00:24:51.581 lat (msec) : 100=31.22%, 250=22.82%, 500=0.19% 00:24:51.581 cpu : usr=0.52%, sys=2.73%, ctx=2023, majf=0, minf=4097 00:24:51.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:51.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.581 issued rwts: total=8829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.581 job8: (groupid=0, jobs=1): err= 0: pid=1795546: Tue Jul 23 06:20:42 2024 00:24:51.581 read: IOPS=441, BW=110MiB/s (116MB/s)(1120MiB/10148msec) 00:24:51.581 slat (usec): min=9, max=171169, avg=1585.27, stdev=6858.99 00:24:51.581 clat (msec): min=4, max=633, avg=143.24, stdev=91.37 00:24:51.581 lat (msec): min=4, max=633, avg=144.83, stdev=91.84 00:24:51.581 clat percentiles (msec): 00:24:51.581 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 44], 20.00th=[ 66], 00:24:51.581 | 30.00th=[ 92], 40.00th=[ 114], 50.00th=[ 140], 60.00th=[ 159], 00:24:51.581 | 70.00th=[ 176], 80.00th=[ 197], 90.00th=[ 245], 95.00th=[ 279], 00:24:51.581 | 99.00th=[ 600], 99.50th=[ 617], 99.90th=[ 634], 99.95th=[ 634], 00:24:51.581 | 99.99th=[ 634] 00:24:51.581 bw ( KiB/s): min=11264, max=229888, per=7.01%, avg=113059.40, stdev=49586.18, samples=20 00:24:51.581 iops : min= 44, max= 898, avg=441.60, stdev=193.72, samples=20 00:24:51.581 lat (msec) : 10=0.27%, 20=1.87%, 50=12.25%, 100=20.80%, 250=55.81% 00:24:51.581 lat (msec) : 500=7.57%, 750=1.43% 00:24:51.581 cpu : usr=0.31%, sys=1.48%, ctx=1019, majf=0, minf=4097 00:24:51.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:51.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.581 issued rwts: total=4481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.581 job9: (groupid=0, jobs=1): err= 0: pid=1795547: Tue Jul 23 06:20:42 2024 00:24:51.581 read: IOPS=745, BW=186MiB/s (195MB/s)(1890MiB/10147msec) 00:24:51.581 slat (usec): min=13, max=325873, avg=1180.16, stdev=7500.34 00:24:51.581 clat (msec): min=3, max=661, avg=84.64, stdev=79.82 00:24:51.581 lat (msec): min=3, max=694, avg=85.82, stdev=80.62 00:24:51.581 clat percentiles (msec): 00:24:51.581 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 35], 00:24:51.581 | 30.00th=[ 39], 40.00th=[ 48], 50.00th=[ 59], 60.00th=[ 74], 00:24:51.581 | 70.00th=[ 92], 80.00th=[ 108], 90.00th=[ 171], 95.00th=[ 228], 00:24:51.581 | 99.00th=[ 447], 99.50th=[ 527], 99.90th=[ 600], 99.95th=[ 600], 00:24:51.581 | 99.99th=[ 659] 00:24:51.581 bw ( KiB/s): min=22528, max=459264, per=11.89%, avg=191876.95, stdev=111548.17, samples=20 00:24:51.581 iops : min= 88, max= 1794, avg=749.45, stdev=435.71, samples=20 00:24:51.581 lat (msec) : 4=0.01%, 10=0.87%, 20=2.90%, 50=37.75%, 100=33.49% 00:24:51.581 lat (msec) : 250=20.67%, 500=3.73%, 750=0.58% 00:24:51.581 cpu : 
usr=0.40%, sys=2.49%, ctx=1552, majf=0, minf=4097 00:24:51.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:51.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.581 issued rwts: total=7561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.581 job10: (groupid=0, jobs=1): err= 0: pid=1795548: Tue Jul 23 06:20:42 2024 00:24:51.581 read: IOPS=687, BW=172MiB/s (180MB/s)(1722MiB/10017msec) 00:24:51.581 slat (usec): min=9, max=148928, avg=817.70, stdev=4396.98 00:24:51.581 clat (usec): min=1221, max=530897, avg=92193.76, stdev=73724.98 00:24:51.581 lat (usec): min=1246, max=533916, avg=93011.46, stdev=74232.15 00:24:51.581 clat percentiles (msec): 00:24:51.581 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 13], 20.00th=[ 30], 00:24:51.581 | 30.00th=[ 38], 40.00th=[ 47], 50.00th=[ 70], 60.00th=[ 96], 00:24:51.581 | 70.00th=[ 134], 80.00th=[ 163], 90.00th=[ 199], 95.00th=[ 228], 00:24:51.581 | 99.00th=[ 271], 99.50th=[ 309], 99.90th=[ 498], 99.95th=[ 502], 00:24:51.581 | 99.99th=[ 531] 00:24:51.581 bw ( KiB/s): min=76288, max=435200, per=10.83%, avg=174692.25, stdev=94998.99, samples=20 00:24:51.581 iops : min= 298, max= 1700, avg=682.35, stdev=371.07, samples=20 00:24:51.581 lat (msec) : 2=0.74%, 4=1.28%, 10=5.76%, 20=6.45%, 50=27.24% 00:24:51.581 lat (msec) : 100=20.67%, 250=36.09%, 500=1.73%, 750=0.04% 00:24:51.581 cpu : usr=0.29%, sys=2.04%, ctx=1906, majf=0, minf=4097 00:24:51.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:51.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.581 issued rwts: total=6888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.581 00:24:51.581 Run status group 0 (all jobs): 00:24:51.581 READ: bw=1576MiB/s (1652MB/s), 77.4MiB/s-218MiB/s (81.2MB/s-229MB/s), io=15.6GiB (16.8GB), run=10017-10150msec 00:24:51.581 00:24:51.581 Disk stats (read/write): 00:24:51.581 nvme0n1: ios=6153/0, merge=0/0, ticks=1221068/0, in_queue=1221068, util=97.20% 00:24:51.581 nvme10n1: ios=16553/0, merge=0/0, ticks=1236305/0, in_queue=1236305, util=97.42% 00:24:51.581 nvme1n1: ios=10631/0, merge=0/0, ticks=1242187/0, in_queue=1242187, util=97.70% 00:24:51.581 nvme2n1: ios=14639/0, merge=0/0, ticks=1237543/0, in_queue=1237543, util=97.83% 00:24:51.581 nvme3n1: ios=6575/0, merge=0/0, ticks=1223528/0, in_queue=1223528, util=97.93% 00:24:51.581 nvme4n1: ios=7813/0, merge=0/0, ticks=1213654/0, in_queue=1213654, util=98.26% 00:24:51.581 nvme5n1: ios=8812/0, merge=0/0, ticks=1231794/0, in_queue=1231794, util=98.41% 00:24:51.581 nvme6n1: ios=17479/0, merge=0/0, ticks=1240930/0, in_queue=1240930, util=98.51% 00:24:51.581 nvme7n1: ios=8817/0, merge=0/0, ticks=1230334/0, in_queue=1230334, util=98.92% 00:24:51.581 nvme8n1: ios=15000/0, merge=0/0, ticks=1219329/0, in_queue=1219329, util=99.09% 00:24:51.581 nvme9n1: ios=13487/0, merge=0/0, ticks=1246096/0, in_queue=1246096, util=99.20% 00:24:51.581 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:51.581 [global] 00:24:51.581 thread=1 00:24:51.581 invalidate=1 
00:24:51.581 rw=randwrite 00:24:51.581 time_based=1 00:24:51.581 runtime=10 00:24:51.581 ioengine=libaio 00:24:51.581 direct=1 00:24:51.581 bs=262144 00:24:51.582 iodepth=64 00:24:51.582 norandommap=1 00:24:51.582 numjobs=1 00:24:51.582 00:24:51.582 [job0] 00:24:51.582 filename=/dev/nvme0n1 00:24:51.582 [job1] 00:24:51.582 filename=/dev/nvme10n1 00:24:51.582 [job2] 00:24:51.582 filename=/dev/nvme1n1 00:24:51.582 [job3] 00:24:51.582 filename=/dev/nvme2n1 00:24:51.582 [job4] 00:24:51.582 filename=/dev/nvme3n1 00:24:51.582 [job5] 00:24:51.582 filename=/dev/nvme4n1 00:24:51.582 [job6] 00:24:51.582 filename=/dev/nvme5n1 00:24:51.582 [job7] 00:24:51.582 filename=/dev/nvme6n1 00:24:51.582 [job8] 00:24:51.582 filename=/dev/nvme7n1 00:24:51.582 [job9] 00:24:51.582 filename=/dev/nvme8n1 00:24:51.582 [job10] 00:24:51.582 filename=/dev/nvme9n1 00:24:51.582 Could not set queue depth (nvme0n1) 00:24:51.582 Could not set queue depth (nvme10n1) 00:24:51.582 Could not set queue depth (nvme1n1) 00:24:51.582 Could not set queue depth (nvme2n1) 00:24:51.582 Could not set queue depth (nvme3n1) 00:24:51.582 Could not set queue depth (nvme4n1) 00:24:51.582 Could not set queue depth (nvme5n1) 00:24:51.582 Could not set queue depth (nvme6n1) 00:24:51.582 Could not set queue depth (nvme7n1) 00:24:51.582 Could not set queue depth (nvme8n1) 00:24:51.582 Could not set queue depth (nvme9n1) 00:24:51.582 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:51.582 fio-3.35 00:24:51.582 Starting 11 threads 00:25:01.555 00:25:01.555 job0: (groupid=0, jobs=1): err= 0: pid=1796566: Tue Jul 23 06:20:53 2024 00:25:01.555 write: IOPS=498, BW=125MiB/s (131MB/s)(1259MiB/10100msec); 0 zone resets 00:25:01.555 slat (usec): min=24, max=89490, avg=1643.07, stdev=4073.86 00:25:01.555 clat (msec): min=3, max=336, avg=126.65, stdev=57.18 00:25:01.555 lat (msec): min=3, max=336, avg=128.30, stdev=57.79 00:25:01.555 clat percentiles (msec): 00:25:01.555 | 1.00th=[ 26], 5.00th=[ 66], 10.00th=[ 75], 20.00th=[ 80], 00:25:01.555 | 30.00th=[ 86], 40.00th=[ 100], 50.00th=[ 113], 60.00th=[ 131], 00:25:01.555 | 70.00th=[ 150], 80.00th=[ 167], 90.00th=[ 197], 95.00th=[ 253], 00:25:01.555 | 99.00th=[ 309], 99.50th=[ 313], 
99.90th=[ 326], 99.95th=[ 330], 00:25:01.555 | 99.99th=[ 338] 00:25:01.555 bw ( KiB/s): min=47104, max=210944, per=10.23%, avg=127247.85, stdev=42127.31, samples=20 00:25:01.555 iops : min= 184, max= 824, avg=497.05, stdev=164.57, samples=20 00:25:01.555 lat (msec) : 4=0.04%, 10=0.10%, 20=0.30%, 50=3.00%, 100=37.35% 00:25:01.555 lat (msec) : 250=53.97%, 500=5.24% 00:25:01.555 cpu : usr=1.51%, sys=1.63%, ctx=2016, majf=0, minf=1 00:25:01.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:01.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.555 issued rwts: total=0,5034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.555 job1: (groupid=0, jobs=1): err= 0: pid=1796578: Tue Jul 23 06:20:53 2024 00:25:01.555 write: IOPS=479, BW=120MiB/s (126MB/s)(1221MiB/10185msec); 0 zone resets 00:25:01.555 slat (usec): min=23, max=46281, avg=1410.72, stdev=3823.04 00:25:01.555 clat (msec): min=3, max=433, avg=131.93, stdev=65.72 00:25:01.555 lat (msec): min=3, max=433, avg=133.34, stdev=66.48 00:25:01.555 clat percentiles (msec): 00:25:01.555 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 57], 20.00th=[ 78], 00:25:01.555 | 30.00th=[ 94], 40.00th=[ 106], 50.00th=[ 124], 60.00th=[ 136], 00:25:01.555 | 70.00th=[ 155], 80.00th=[ 194], 90.00th=[ 224], 95.00th=[ 241], 00:25:01.555 | 99.00th=[ 309], 99.50th=[ 363], 99.90th=[ 418], 99.95th=[ 426], 00:25:01.555 | 99.99th=[ 435] 00:25:01.555 bw ( KiB/s): min=61440, max=209408, per=9.92%, avg=123404.70, stdev=38321.44, samples=20 00:25:01.555 iops : min= 240, max= 818, avg=482.00, stdev=149.69, samples=20 00:25:01.555 lat (msec) : 4=0.02%, 10=0.18%, 20=1.13%, 50=6.45%, 100=27.99% 00:25:01.555 lat (msec) : 250=60.42%, 500=3.81% 00:25:01.555 cpu : usr=1.43%, sys=1.52%, ctx=2686, majf=0, minf=1 00:25:01.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:01.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.555 issued rwts: total=0,4884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.555 job2: (groupid=0, jobs=1): err= 0: pid=1796579: Tue Jul 23 06:20:53 2024 00:25:01.555 write: IOPS=393, BW=98.5MiB/s (103MB/s)(1004MiB/10196msec); 0 zone resets 00:25:01.555 slat (usec): min=19, max=104219, avg=1811.30, stdev=5314.78 00:25:01.555 clat (msec): min=2, max=478, avg=160.50, stdev=75.09 00:25:01.555 lat (msec): min=2, max=478, avg=162.31, stdev=76.13 00:25:01.555 clat percentiles (msec): 00:25:01.555 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 44], 20.00th=[ 100], 00:25:01.555 | 30.00th=[ 129], 40.00th=[ 148], 50.00th=[ 165], 60.00th=[ 184], 00:25:01.555 | 70.00th=[ 201], 80.00th=[ 222], 90.00th=[ 253], 95.00th=[ 288], 00:25:01.555 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 468], 99.95th=[ 472], 00:25:01.555 | 99.99th=[ 481] 00:25:01.555 bw ( KiB/s): min=55296, max=161280, per=8.14%, avg=101219.30, stdev=27264.60, samples=20 00:25:01.555 iops : min= 216, max= 630, avg=395.35, stdev=106.54, samples=20 00:25:01.555 lat (msec) : 4=0.07%, 10=0.60%, 20=2.54%, 50=8.34%, 100=8.64% 00:25:01.555 lat (msec) : 250=69.45%, 500=10.36% 00:25:01.555 cpu : usr=1.27%, sys=1.07%, ctx=2218, majf=0, minf=1 00:25:01.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 
32=0.8%, >=64=98.4% 00:25:01.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.555 issued rwts: total=0,4017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.555 job3: (groupid=0, jobs=1): err= 0: pid=1796580: Tue Jul 23 06:20:53 2024 00:25:01.555 write: IOPS=548, BW=137MiB/s (144MB/s)(1385MiB/10101msec); 0 zone resets 00:25:01.555 slat (usec): min=21, max=121043, avg=1491.27, stdev=4338.94 00:25:01.555 clat (usec): min=1359, max=334492, avg=115132.39, stdev=56774.50 00:25:01.555 lat (usec): min=1404, max=334542, avg=116623.66, stdev=57261.07 00:25:01.555 clat percentiles (msec): 00:25:01.555 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 61], 20.00th=[ 73], 00:25:01.555 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 105], 60.00th=[ 118], 00:25:01.555 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 192], 95.00th=[ 234], 00:25:01.555 | 99.00th=[ 292], 99.50th=[ 313], 99.90th=[ 330], 99.95th=[ 330], 00:25:01.555 | 99.99th=[ 334] 00:25:01.555 bw ( KiB/s): min=65536, max=230912, per=11.27%, avg=140175.95, stdev=44151.85, samples=20 00:25:01.555 iops : min= 256, max= 902, avg=547.55, stdev=172.48, samples=20 00:25:01.555 lat (msec) : 2=0.07%, 4=0.25%, 10=0.87%, 20=1.16%, 50=4.51% 00:25:01.555 lat (msec) : 100=40.15%, 250=50.53%, 500=2.46% 00:25:01.555 cpu : usr=1.48%, sys=1.91%, ctx=2370, majf=0, minf=1 00:25:01.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:01.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.555 issued rwts: total=0,5539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.555 job4: (groupid=0, jobs=1): err= 0: pid=1796581: Tue Jul 23 06:20:53 2024 00:25:01.555 write: IOPS=361, BW=90.4MiB/s (94.8MB/s)(913MiB/10100msec); 0 zone resets 00:25:01.555 slat (usec): min=23, max=190404, avg=2120.50, stdev=8428.13 00:25:01.555 clat (msec): min=6, max=774, avg=174.70, stdev=110.21 00:25:01.555 lat (msec): min=9, max=774, avg=176.82, stdev=111.76 00:25:01.555 clat percentiles (msec): 00:25:01.555 | 1.00th=[ 18], 5.00th=[ 51], 10.00th=[ 69], 20.00th=[ 91], 00:25:01.555 | 30.00th=[ 115], 40.00th=[ 136], 50.00th=[ 157], 60.00th=[ 182], 00:25:01.555 | 70.00th=[ 209], 80.00th=[ 228], 90.00th=[ 284], 95.00th=[ 372], 00:25:01.555 | 99.00th=[ 625], 99.50th=[ 726], 99.90th=[ 768], 99.95th=[ 776], 00:25:01.555 | 99.99th=[ 776] 00:25:01.555 bw ( KiB/s): min=13824, max=175104, per=7.39%, avg=91868.20, stdev=42785.35, samples=20 00:25:01.556 iops : min= 54, max= 684, avg=358.85, stdev=167.13, samples=20 00:25:01.556 lat (msec) : 10=0.08%, 20=1.20%, 50=3.70%, 100=19.00%, 250=62.71% 00:25:01.556 lat (msec) : 500=10.71%, 750=2.33%, 1000=0.27% 00:25:01.556 cpu : usr=1.04%, sys=1.13%, ctx=1899, majf=0, minf=1 00:25:01.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.556 issued rwts: total=0,3652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.556 job5: (groupid=0, jobs=1): err= 0: pid=1796582: Tue Jul 23 06:20:53 2024 00:25:01.556 write: IOPS=375, 
BW=93.9MiB/s (98.5MB/s)(955MiB/10165msec); 0 zone resets 00:25:01.556 slat (usec): min=19, max=109741, avg=1919.32, stdev=5087.63 00:25:01.556 clat (msec): min=5, max=433, avg=168.36, stdev=74.63 00:25:01.556 lat (msec): min=5, max=433, avg=170.28, stdev=75.50 00:25:01.556 clat percentiles (msec): 00:25:01.556 | 1.00th=[ 25], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 94], 00:25:01.556 | 30.00th=[ 140], 40.00th=[ 155], 50.00th=[ 178], 60.00th=[ 194], 00:25:01.556 | 70.00th=[ 211], 80.00th=[ 228], 90.00th=[ 259], 95.00th=[ 271], 00:25:01.556 | 99.00th=[ 351], 99.50th=[ 384], 99.90th=[ 418], 99.95th=[ 435], 00:25:01.556 | 99.99th=[ 435] 00:25:01.556 bw ( KiB/s): min=55296, max=148992, per=7.73%, avg=96144.75, stdev=27782.79, samples=20 00:25:01.556 iops : min= 216, max= 582, avg=375.55, stdev=108.53, samples=20 00:25:01.556 lat (msec) : 10=0.16%, 20=0.37%, 50=6.68%, 100=13.62%, 250=66.33% 00:25:01.556 lat (msec) : 500=12.86% 00:25:01.556 cpu : usr=1.20%, sys=1.13%, ctx=1977, majf=0, minf=1 00:25:01.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.556 issued rwts: total=0,3819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.556 job6: (groupid=0, jobs=1): err= 0: pid=1796583: Tue Jul 23 06:20:53 2024 00:25:01.556 write: IOPS=413, BW=103MiB/s (108MB/s)(1045MiB/10107msec); 0 zone resets 00:25:01.556 slat (usec): min=15, max=94968, avg=1657.95, stdev=4615.06 00:25:01.556 clat (msec): min=4, max=374, avg=153.03, stdev=64.88 00:25:01.556 lat (msec): min=4, max=374, avg=154.69, stdev=65.85 00:25:01.556 clat percentiles (msec): 00:25:01.556 | 1.00th=[ 19], 5.00th=[ 36], 10.00th=[ 65], 20.00th=[ 92], 00:25:01.556 | 30.00th=[ 109], 40.00th=[ 142], 50.00th=[ 157], 60.00th=[ 180], 00:25:01.556 | 70.00th=[ 197], 80.00th=[ 211], 90.00th=[ 232], 95.00th=[ 259], 00:25:01.556 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 288], 99.95th=[ 292], 00:25:01.556 | 99.99th=[ 376] 00:25:01.556 bw ( KiB/s): min=65536, max=195584, per=8.47%, avg=105383.00, stdev=34398.61, samples=20 00:25:01.556 iops : min= 256, max= 764, avg=411.65, stdev=134.37, samples=20 00:25:01.556 lat (msec) : 10=0.24%, 20=1.12%, 50=6.17%, 100=17.32%, 250=68.54% 00:25:01.556 lat (msec) : 500=6.60% 00:25:01.556 cpu : usr=1.12%, sys=1.21%, ctx=2325, majf=0, minf=1 00:25:01.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.556 issued rwts: total=0,4180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.556 job7: (groupid=0, jobs=1): err= 0: pid=1796584: Tue Jul 23 06:20:53 2024 00:25:01.556 write: IOPS=385, BW=96.4MiB/s (101MB/s)(980MiB/10166msec); 0 zone resets 00:25:01.556 slat (usec): min=24, max=147369, avg=1720.34, stdev=5960.88 00:25:01.556 clat (msec): min=4, max=448, avg=164.15, stdev=77.73 00:25:01.556 lat (msec): min=4, max=448, avg=165.87, stdev=78.74 00:25:01.556 clat percentiles (msec): 00:25:01.556 | 1.00th=[ 17], 5.00th=[ 51], 10.00th=[ 71], 20.00th=[ 92], 00:25:01.556 | 30.00th=[ 114], 40.00th=[ 140], 50.00th=[ 153], 60.00th=[ 180], 00:25:01.556 | 70.00th=[ 211], 80.00th=[ 230], 90.00th=[ 271], 
95.00th=[ 296], 00:25:01.556 | 99.00th=[ 359], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 447], 00:25:01.556 | 99.99th=[ 447] 00:25:01.556 bw ( KiB/s): min=51200, max=157184, per=7.93%, avg=98700.85, stdev=31119.69, samples=20 00:25:01.556 iops : min= 200, max= 614, avg=385.55, stdev=121.56, samples=20 00:25:01.556 lat (msec) : 10=0.10%, 20=1.81%, 50=3.06%, 100=19.37%, 250=60.40% 00:25:01.556 lat (msec) : 500=15.26% 00:25:01.556 cpu : usr=1.02%, sys=1.38%, ctx=2274, majf=0, minf=1 00:25:01.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.556 issued rwts: total=0,3919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.556 job8: (groupid=0, jobs=1): err= 0: pid=1796587: Tue Jul 23 06:20:53 2024 00:25:01.556 write: IOPS=410, BW=103MiB/s (108MB/s)(1045MiB/10181msec); 0 zone resets 00:25:01.556 slat (usec): min=16, max=78823, avg=1731.49, stdev=4620.91 00:25:01.556 clat (msec): min=3, max=388, avg=153.98, stdev=73.20 00:25:01.556 lat (msec): min=3, max=388, avg=155.71, stdev=74.20 00:25:01.556 clat percentiles (msec): 00:25:01.556 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 53], 20.00th=[ 87], 00:25:01.556 | 30.00th=[ 123], 40.00th=[ 142], 50.00th=[ 157], 60.00th=[ 174], 00:25:01.556 | 70.00th=[ 192], 80.00th=[ 211], 90.00th=[ 243], 95.00th=[ 275], 00:25:01.556 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 376], 00:25:01.556 | 99.99th=[ 388] 00:25:01.556 bw ( KiB/s): min=58484, max=185856, per=8.47%, avg=105417.15, stdev=33517.43, samples=20 00:25:01.556 iops : min= 228, max= 726, avg=411.75, stdev=130.97, samples=20 00:25:01.556 lat (msec) : 4=0.10%, 10=1.51%, 20=2.22%, 50=5.64%, 100=13.90% 00:25:01.556 lat (msec) : 250=67.57%, 500=9.06% 00:25:01.556 cpu : usr=1.07%, sys=1.20%, ctx=2313, majf=0, minf=1 00:25:01.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.556 issued rwts: total=0,4181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.556 job9: (groupid=0, jobs=1): err= 0: pid=1796588: Tue Jul 23 06:20:53 2024 00:25:01.556 write: IOPS=562, BW=141MiB/s (147MB/s)(1431MiB/10183msec); 0 zone resets 00:25:01.556 slat (usec): min=16, max=45467, avg=1319.41, stdev=3466.02 00:25:01.556 clat (msec): min=2, max=401, avg=112.42, stdev=70.69 00:25:01.556 lat (msec): min=2, max=401, avg=113.74, stdev=71.41 00:25:01.556 clat percentiles (msec): 00:25:01.556 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 53], 20.00th=[ 63], 00:25:01.556 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 83], 60.00th=[ 111], 00:25:01.556 | 70.00th=[ 134], 80.00th=[ 165], 90.00th=[ 220], 95.00th=[ 253], 00:25:01.556 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:01.556 | 99.99th=[ 401] 00:25:01.556 bw ( KiB/s): min=68096, max=247296, per=11.65%, avg=144897.20, stdev=61448.99, samples=20 00:25:01.556 iops : min= 266, max= 966, avg=566.00, stdev=240.03, samples=20 00:25:01.556 lat (msec) : 4=0.17%, 10=1.00%, 20=1.21%, 50=7.09%, 100=47.34% 00:25:01.556 lat (msec) : 250=37.56%, 500=5.63% 00:25:01.556 cpu : usr=1.74%, sys=1.68%, ctx=2646, majf=0, minf=1 00:25:01.556 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.556 issued rwts: total=0,5724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.556 job10: (groupid=0, jobs=1): err= 0: pid=1796589: Tue Jul 23 06:20:53 2024 00:25:01.556 write: IOPS=450, BW=113MiB/s (118MB/s)(1148MiB/10189msec); 0 zone resets 00:25:01.556 slat (usec): min=24, max=114250, avg=1629.57, stdev=5166.12 00:25:01.556 clat (msec): min=5, max=419, avg=139.99, stdev=71.84 00:25:01.556 lat (msec): min=5, max=419, avg=141.62, stdev=72.51 00:25:01.556 clat percentiles (msec): 00:25:01.556 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 63], 20.00th=[ 87], 00:25:01.556 | 30.00th=[ 99], 40.00th=[ 109], 50.00th=[ 131], 60.00th=[ 142], 00:25:01.556 | 70.00th=[ 157], 80.00th=[ 203], 90.00th=[ 245], 95.00th=[ 271], 00:25:01.556 | 99.00th=[ 351], 99.50th=[ 372], 99.90th=[ 393], 99.95th=[ 422], 00:25:01.556 | 99.99th=[ 422] 00:25:01.556 bw ( KiB/s): min=56320, max=174080, per=9.32%, avg=115929.55, stdev=38806.65, samples=20 00:25:01.556 iops : min= 220, max= 680, avg=452.80, stdev=151.57, samples=20 00:25:01.556 lat (msec) : 10=0.37%, 20=2.00%, 50=5.55%, 100=24.56%, 250=58.79% 00:25:01.556 lat (msec) : 500=8.73% 00:25:01.556 cpu : usr=1.35%, sys=1.45%, ctx=2117, majf=0, minf=1 00:25:01.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:01.556 issued rwts: total=0,4593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:01.556 00:25:01.556 Run status group 0 (all jobs): 00:25:01.556 WRITE: bw=1215MiB/s (1274MB/s), 90.4MiB/s-141MiB/s (94.8MB/s-147MB/s), io=12.1GiB (13.0GB), run=10100-10196msec 00:25:01.556 00:25:01.556 Disk stats (read/write): 00:25:01.556 nvme0n1: ios=53/9868, merge=0/0, ticks=2125/1214236, in_queue=1216361, util=99.86% 00:25:01.556 nvme10n1: ios=41/9758, merge=0/0, ticks=812/1247101, in_queue=1247913, util=100.00% 00:25:01.556 nvme1n1: ios=50/7995, merge=0/0, ticks=540/1241809, in_queue=1242349, util=100.00% 00:25:01.556 nvme2n1: ios=41/10874, merge=0/0, ticks=1987/1205425, in_queue=1207412, util=100.00% 00:25:01.556 nvme3n1: ios=46/7103, merge=0/0, ticks=2743/1192355, in_queue=1195098, util=100.00% 00:25:01.556 nvme4n1: ios=0/7473, merge=0/0, ticks=0/1213095, in_queue=1213095, util=98.11% 00:25:01.556 nvme5n1: ios=0/8102, merge=0/0, ticks=0/1222930, in_queue=1222930, util=98.27% 00:25:01.556 nvme6n1: ios=44/7671, merge=0/0, ticks=2678/1210578, in_queue=1213256, util=100.00% 00:25:01.557 nvme7n1: ios=44/8356, merge=0/0, ticks=862/1245056, in_queue=1245918, util=100.00% 00:25:01.557 nvme8n1: ios=41/11441, merge=0/0, ticks=856/1239987, in_queue=1240843, util=100.00% 00:25:01.557 nvme9n1: ios=47/9167, merge=0/0, ticks=4122/1214359, in_queue=1218481, util=100.00% 00:25:01.557 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:01.557 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:01.557 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
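Both fio passes above (the sequential-read pass and the randwrite pass) are launched through scripts/fio-wrapper, which writes out the job file printed at the start of each run, with one [jobN] section per /dev/nvme*n1 namespace. As a rough single-device command-line equivalent of the settings shown in that job file (a sketch, not the wrapper's exact invocation):

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=read --bs=262144 --iodepth=64 --numjobs=1 \
    --ioengine=libaio --direct=1 --invalidate=1 --norandommap \
    --thread --time_based --runtime=10

In the per-job summaries, the "per=" field is each job's share of the aggregate bandwidth reported on the run-status line: for example, job1's 212528.70 KiB/s average is 212528.7 / (1576 * 1024) ≈ 13.17% of the 1576 MiB/s READ total.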
00:25:01.557 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:01.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:01.557 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:01.557 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.557 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:01.815 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:01.815 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:02.380 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:02.380 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:02.380 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:02.638 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.638 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:02.898 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:02.898 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.898 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:03.157 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.157 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:03.158 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.158 rmmod nvme_tcp 00:25:03.158 rmmod nvme_fabrics 00:25:03.158 rmmod nvme_keyring 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1791279 ']' 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1791279 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1791279 ']' 00:25:03.158 06:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1791279 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.158 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1791279 00:25:03.417 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:03.417 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:03.417 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1791279' 00:25:03.417 killing process with pid 1791279 00:25:03.417 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1791279 00:25:03.417 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1791279 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.987 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.892 00:25:05.892 real 1m0.294s 00:25:05.892 user 3m13.150s 00:25:05.892 sys 0m25.157s 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.892 ************************************ 00:25:05.892 END TEST nvmf_multiconnection 00:25:05.892 ************************************ 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.892 ************************************ 00:25:05.892 START TEST nvmf_initiator_timeout 00:25:05.892 ************************************ 
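For readers following the trace, the multiconnection teardown above (target/multiconnection.sh@37-40) repeats one pattern per subsystem: disconnect the host-side controller, poll lsblk until the SPDK serial disappears, then delete the subsystem over RPC. A minimal sketch of that loop follows; it assumes NVMF_SUBSYS=11 (inferred from cnode1..cnode11 in this run) and approximates waitforserial_disconnect (common/autotest_common.sh@1219-1231) with a bounded poll rather than the exact helper body.

    NVMF_SUBSYS=11                                    # cnode1..cnode11 in this run
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # drop the host-side controller for this subsystem
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # wait until no block device reports the SPDK${i} serial any more
        tries=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            (( tries++ >= 15 )) && break
            sleep 1
        done
        # remove the subsystem on the target side (rpc_cmd wraps scripts/rpc.py)
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done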
00:25:05.892 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:05.892 * Looking for test storage... 00:25:05.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.893 06:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.893 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.794 06:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:07.794 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.794 06:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:07.794 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.794 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:07.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.795 06:21:01 
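The device discovery traced around here (nvmf/common.sh@382-401) maps each supported PCI function to its kernel netdev name through sysfs before any addressing is done. A rough sketch, with the two E810 PCI addresses from this run hard-coded purely for illustration:

    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # every netdev bound to this PCI function appears under its sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")       # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done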
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:07.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.795 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.053 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.053 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:25:08.054 00:25:08.054 --- 10.0.0.2 ping statistics --- 00:25:08.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.054 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:25:08.054 00:25:08.054 --- 10.0.0.1 ping statistics --- 00:25:08.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.054 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1799910 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1799910 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1799910 ']' 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
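The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) gives the test a private point-to-point topology: one port of the E810 pair is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), the other stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, and reachability is checked with a ping in each direction. A condensed sketch, with interface names and addresses taken straight from the log:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                     # host -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> host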
00:25:08.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.054 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.054 [2024-07-23 06:21:01.269988] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:25:08.054 [2024-07-23 06:21:01.270079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.054 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.054 [2024-07-23 06:21:01.309839] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:08.054 [2024-07-23 06:21:01.337275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.312 [2024-07-23 06:21:01.426306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.312 [2024-07-23 06:21:01.426356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.312 [2024-07-23 06:21:01.426379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.312 [2024-07-23 06:21:01.426390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.312 [2024-07-23 06:21:01.426400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.312 [2024-07-23 06:21:01.428632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.312 [2024-07-23 06:21:01.428704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.312 [2024-07-23 06:21:01.428781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.312 [2024-07-23 06:21:01.428784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:25:08.312 Malloc0 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.312 Delay0 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.312 [2024-07-23 06:21:01.610458] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.312 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.313 [2024-07-23 06:21:01.638767] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.313 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:09.253 06:21:02 
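The target-side construction traced above (target/initiator_timeout.sh@19-29) stacks a delay bdev on top of a malloc bdev so that I/O latency can be dialed up at runtime, exports it over NVMe/TCP, and then connects the host through the kernel initiator. The same RPC sequence as a sketch (rpc_cmd in the trace effectively runs scripts/rpc.py against the target inside the namespace; the host NQN/ID values are the generated ones shown above):

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB, 512 B blocks
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 \
        -r 30 -t 30 -w 30 -n 30                                   # baseline latencies in usec
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side: kernel NVMe/TCP initiator
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420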
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:09.253 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:09.253 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.253 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:09.253 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1800316 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:11.154 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:11.154 [global] 00:25:11.154 thread=1 00:25:11.154 invalidate=1 00:25:11.154 rw=write 00:25:11.154 time_based=1 00:25:11.154 runtime=60 00:25:11.154 ioengine=libaio 00:25:11.154 direct=1 00:25:11.154 bs=4096 00:25:11.154 iodepth=1 00:25:11.154 norandommap=0 00:25:11.154 numjobs=1 00:25:11.154 00:25:11.154 verify_dump=1 00:25:11.154 verify_backlog=512 00:25:11.154 verify_state_save=0 00:25:11.154 do_verify=1 00:25:11.154 verify=crc32c-intel 00:25:11.154 [job0] 00:25:11.154 filename=/dev/nvme0n1 00:25:11.154 Could not set queue depth (nvme0n1) 00:25:11.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:11.412 fio-3.35 00:25:11.412 Starting 1 thread 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:14.694 true 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.694 true 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:14.694 true 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:14.694 true 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.694 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.224 true 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.224 true 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.224 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.225 true 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.225 true 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
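The sequence just traced is the core of the initiator-timeout scenario: while fio is writing through Delay0, every latency knob on the delay bdev is raised to 31,000,000 usec (31 s, with p99_write set to 310,000,000 in this run), which appears chosen to exceed the kernel initiator's default 30 s I/O timeout, and after a pause the knobs are dropped back to 30 usec so the workload can recover and finish verification. A sketch of the same steps, using the values shown in the log:

    # stall the namespace: bump the delay bdev past the initiator's I/O timeout
    rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000
    rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # recover: restore the baseline 30 usec latencies so fio can complete
    rpc_cmd bdev_delay_update_latency Delay0 avg_read  30
    rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
    rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 30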
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:17.225 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1800316 00:25:55.941 [2024-07-23 06:21:48.044201] ctrlr.c:3737:nvmf_ctrlr_process_admin_cmd: *ERROR*: Admin command sent to disabled controller 00:26:14.065 00:26:14.065 job0: (groupid=0, jobs=1): err= 0: pid=1800387: Tue Jul 23 06:22:04 2024 00:26:14.065 read: IOPS=86, BW=345KiB/s (353kB/s)(20.2MiB/60022msec) 00:26:14.065 slat (usec): min=5, max=14149, avg=18.75, stdev=222.93 00:26:14.065 clat (usec): min=342, max=41212k, avg=11102.49, stdev=573203.58 00:26:14.065 lat (usec): min=348, max=41212k, avg=11121.24, stdev=573203.57 00:26:14.065 clat percentiles (usec): 00:26:14.065 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 367], 00:26:14.065 | 20.00th=[ 375], 30.00th=[ 383], 40.00th=[ 392], 00:26:14.065 | 50.00th=[ 400], 60.00th=[ 412], 70.00th=[ 429], 00:26:14.065 | 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 41157], 00:26:14.065 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:14.065 | 99.95th=[ 42730], 99.99th=[17112761] 00:26:14.065 write: IOPS=93, BW=375KiB/s (384kB/s)(22.0MiB/60022msec); 0 zone resets 00:26:14.065 slat (usec): min=6, max=29330, avg=25.93, stdev=390.71 00:26:14.065 clat (usec): min=224, max=1329, avg=412.99, stdev=90.74 00:26:14.065 lat (usec): min=230, max=29662, avg=438.92, stdev=401.58 00:26:14.065 clat percentiles (usec): 00:26:14.065 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 306], 00:26:14.065 | 30.00th=[ 392], 40.00th=[ 424], 50.00th=[ 437], 60.00th=[ 453], 00:26:14.065 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 502], 95.00th=[ 523], 00:26:14.065 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 627], 99.95th=[ 824], 00:26:14.065 | 99.99th=[ 1336] 00:26:14.065 bw ( KiB/s): min= 264, max= 5472, per=100.00%, avg=3754.67, stdev=1282.96, samples=12 00:26:14.065 iops : min= 66, max= 1368, avg=938.67, stdev=320.74, samples=12 00:26:14.065 lat (usec) : 250=2.84%, 500=83.03%, 750=10.75%, 1000=0.15% 00:26:14.065 lat (msec) : 2=0.05%, 50=3.18%, >=2000=0.01% 00:26:14.065 cpu : usr=0.25%, sys=0.41%, ctx=10808, majf=0, minf=2 00:26:14.065 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.065 issued rwts: total=5170,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.065 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:14.065 00:26:14.065 Run status group 0 (all jobs): 00:26:14.065 READ: bw=345KiB/s (353kB/s), 345KiB/s-345KiB/s (353kB/s-353kB/s), io=20.2MiB (21.2MB), run=60022-60022msec 00:26:14.065 WRITE: bw=375KiB/s (384kB/s), 375KiB/s-375KiB/s (384kB/s-384kB/s), io=22.0MiB (23.1MB), run=60022-60022msec 00:26:14.065 00:26:14.065 Disk stats (read/write): 00:26:14.065 nvme0n1: ios=5219/5632, merge=0/0, ticks=16795/2219, in_queue=19014, util=99.96% 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:14.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:14.065 06:22:04 
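The fio job file driving the run is printed in full further up (rw=write, bs=4096, iodepth=1, libaio, 60 s time-based, crc32c-intel verification against /dev/nvme0n1). For reference only, a roughly equivalent standalone invocation is sketched below; the test itself goes through spdk/scripts/fio-wrapper with '-p nvmf -i 4096 -d 1 -t write -r 60 -v', and the option spellings here assume a reasonably recent fio:

    fio --name=job0 --filename=/dev/nvme0n1 --thread \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --invalidate=1 \
        --time_based --runtime=60 \
        --do_verify=1 --verify=crc32c-intel \
        --verify_dump=1 --verify_backlog=512 --verify_state_save=0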
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:14.065 nvmf hotplug test: fio successful as expected 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.065 rmmod nvme_tcp 00:26:14.065 rmmod nvme_fabrics 00:26:14.065 rmmod nvme_keyring 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1799910 ']' 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1799910 00:26:14.065 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1799910 ']' 00:26:14.065 
06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1799910 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1799910 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1799910' 00:26:14.066 killing process with pid 1799910 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1799910 00:26:14.066 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1799910 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.066 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.066 00:26:14.066 real 1m8.096s 00:26:14.066 user 4m9.088s 00:26:14.066 sys 0m7.286s 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.066 ************************************ 00:26:14.066 END TEST nvmf_initiator_timeout 00:26:14.066 ************************************ 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.066 06:22:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.965 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:15.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.966 
06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:15.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:15.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:15.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:15.966 ************************************ 00:26:15.966 START TEST nvmf_perf_adq 00:26:15.966 ************************************ 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:15.966 * Looking for test storage... 00:26:15.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:15.966 06:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:15.966 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:15.967 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:18.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:18.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.496 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:18.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:18.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:18.497 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:18.756 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:20.663 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.943 06:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:25.943 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.943 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:25.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:25.944 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:25.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.944 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:25.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:26:25.944 00:26:25.944 --- 10.0.0.2 ping statistics --- 00:26:25.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.944 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
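In summary, the per-test network setup traced here reduces to a short sequence. This is a condensed sketch assembled only from commands and names that appear in this trace (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, port 4420); it is illustrative rather than the harness's literal nvmf_tcp_init body, and it omits the preliminary address flushes:

  # one NIC port (cvl_0_0) becomes the target side inside a private namespace;
  # its peer port (cvl_0_1) stays in the root namespace as the initiator side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port and confirm reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings completing with 0% packet loss, as in the surrounding output, is what allows the harness to return 0 here and continue.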
00:26:25.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:26:25.944 00:26:25.944 --- 10.0.0.1 ping statistics --- 00:26:25.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.944 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1812561 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1812561 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1812561 ']' 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.944 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:25.944 [2024-07-23 06:22:19.164695] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
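With the namespace in place, the target itself is launched inside cvl_0_0_ns_spdk and configured over JSON-RPC before its framework starts. A minimal sketch of that launch, using only the paths and flags visible in this trace (rpc_cmd is the harness wrapper around SPDK's JSON-RPC client; process management is simplified here):

  # run nvmf_tgt on cores 0-3 inside the target namespace, paused until RPC configuration
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # after the app is listening on /var/tmp/spdk.sock, socket options are applied;
  # they must land before framework_start_init, which is why --wait-for-rpc is used
  rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0

This first pass sets --enable-placement-id 0 (placement id disabled); the ice driver reload further down suggests a subsequent pass with it enabled for comparison.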
00:26:25.944 [2024-07-23 06:22:19.164787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.944 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.944 [2024-07-23 06:22:19.204405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:25.944 [2024-07-23 06:22:19.232360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.203 [2024-07-23 06:22:19.321349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.203 [2024-07-23 06:22:19.321424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.203 [2024-07-23 06:22:19.321451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.203 [2024-07-23 06:22:19.321462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.203 [2024-07-23 06:22:19.321472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.203 [2024-07-23 06:22:19.321558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.203 [2024-07-23 06:22:19.321630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.203 [2024-07-23 06:22:19.321698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.203 [2024-07-23 06:22:19.321701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.203 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.203 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.204 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.462 [2024-07-23 06:22:19.554565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.462 Malloc1 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.462 [2024-07-23 06:22:19.605692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@74 -- # perfpid=1812595 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:26.462 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:26.462 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:28.367 "tick_rate": 2700000000, 00:26:28.367 "poll_groups": [ 00:26:28.367 { 00:26:28.367 "name": "nvmf_tgt_poll_group_000", 00:26:28.367 "admin_qpairs": 1, 00:26:28.367 "io_qpairs": 1, 00:26:28.367 "current_admin_qpairs": 1, 00:26:28.367 "current_io_qpairs": 1, 00:26:28.367 "pending_bdev_io": 0, 00:26:28.367 "completed_nvme_io": 18548, 00:26:28.367 "transports": [ 00:26:28.367 { 00:26:28.367 "trtype": "TCP" 00:26:28.367 } 00:26:28.367 ] 00:26:28.367 }, 00:26:28.367 { 00:26:28.367 "name": "nvmf_tgt_poll_group_001", 00:26:28.367 "admin_qpairs": 0, 00:26:28.367 "io_qpairs": 1, 00:26:28.367 "current_admin_qpairs": 0, 00:26:28.367 "current_io_qpairs": 1, 00:26:28.367 "pending_bdev_io": 0, 00:26:28.367 "completed_nvme_io": 18494, 00:26:28.367 "transports": [ 00:26:28.367 { 00:26:28.367 "trtype": "TCP" 00:26:28.367 } 00:26:28.367 ] 00:26:28.367 }, 00:26:28.367 { 00:26:28.367 "name": "nvmf_tgt_poll_group_002", 00:26:28.367 "admin_qpairs": 0, 00:26:28.367 "io_qpairs": 1, 00:26:28.367 "current_admin_qpairs": 0, 00:26:28.367 "current_io_qpairs": 1, 00:26:28.367 "pending_bdev_io": 0, 00:26:28.367 "completed_nvme_io": 20041, 00:26:28.367 "transports": [ 00:26:28.367 { 00:26:28.367 "trtype": "TCP" 00:26:28.367 } 00:26:28.367 ] 00:26:28.367 }, 00:26:28.367 { 00:26:28.367 "name": "nvmf_tgt_poll_group_003", 00:26:28.367 "admin_qpairs": 0, 00:26:28.367 "io_qpairs": 1, 00:26:28.367 "current_admin_qpairs": 0, 00:26:28.367 "current_io_qpairs": 1, 00:26:28.367 "pending_bdev_io": 0, 00:26:28.367 "completed_nvme_io": 19557, 00:26:28.367 "transports": [ 00:26:28.367 { 00:26:28.367 "trtype": "TCP" 00:26:28.367 } 00:26:28.367 ] 00:26:28.367 } 00:26:28.367 ] 00:26:28.367 }' 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:28.367 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1812595 00:26:36.493 Initializing NVMe Controllers 00:26:36.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 4 00:26:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:36.493 Initialization complete. Launching workers. 00:26:36.493 ======================================================== 00:26:36.493 Latency(us) 00:26:36.493 Device Information : IOPS MiB/s Average min max 00:26:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10724.10 41.89 5968.30 1815.96 8968.46 00:26:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10199.90 39.84 6276.32 1981.66 10468.12 00:26:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10995.00 42.95 5822.09 1937.85 8220.96 00:26:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10168.90 39.72 6294.46 1966.40 8863.07 00:26:36.493 ======================================================== 00:26:36.493 Total : 42087.90 164.41 6083.56 1815.96 10468.12 00:26:36.493 00:26:36.493 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:36.493 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.493 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:36.493 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.493 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:36.493 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.493 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.493 rmmod nvme_tcp 00:26:36.753 rmmod nvme_fabrics 00:26:36.754 rmmod nvme_keyring 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1812561 ']' 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1812561 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1812561 ']' 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1812561 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1812561 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1812561' 00:26:36.754 killing process with pid 
1812561 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1812561 00:26:36.754 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1812561 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.013 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.919 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.919 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:38.919 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:39.852 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:41.748 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 
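At this point the first perf_adq pass has finished and been torn down, and the trace just above bounces the ice driver before the busy-poll variant is configured further below. Stripped of the xtrace prefixes, that adq_reload_driver step is only the following three commands, taken directly from this run's trace; it must run as root, and the 5-second delay is simply the settle time the test script uses before reconfiguring the NIC.

    rmmod ice      # unload the E810 driver so its ADQ channel/filter state is cleared
    modprobe ice   # reload it
    sleep 5        # give the ports time to come back before the next configuration pass
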
00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.018 06:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:47.018 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:47.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:47.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.018 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:47.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:47.019 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:47.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:26:47.019 00:26:47.019 --- 10.0.0.2 ping statistics --- 00:26:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.019 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:26:47.019 00:26:47.019 --- 10.0.0.1 ping statistics --- 00:26:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.019 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:47.019 net.core.busy_poll = 1 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:47.019 net.core.busy_read = 1 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:47.019 06:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1815206 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1815206 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1815206 ']' 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.019 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.019 [2024-07-23 06:22:40.261331] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:26:47.019 [2024-07-23 06:22:40.261433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.019 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.019 [2024-07-23 06:22:40.299300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:47.019 [2024-07-23 06:22:40.329984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:47.278 [2024-07-23 06:22:40.421472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.278 [2024-07-23 06:22:40.421531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.278 [2024-07-23 06:22:40.421558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.278 [2024-07-23 06:22:40.421572] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.278 [2024-07-23 06:22:40.421584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
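A few lines above, adq_configure_driver programs the NIC-side half of ADQ inside the target's network namespace before the nvmf_tgt application is launched. Pulled together from the commands traced in this run, the sequence is roughly the sketch below; the interface name cvl_0_0, the namespace cvl_0_0_ns_spdk and the 10.0.0.2:4420 listener address are specific to this node, and the in_ns helper and SPDK_DIR variable are added here only for readability.

    dev=cvl_0_0                                      # E810 port bound to the target in this run
    in_ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }  # run a command inside the target namespace

    in_ns ethtool --offload "$dev" hw-tc-offload on                         # hardware traffic-class offload
    in_ns ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off  # ice priv-flag required for ADQ
    sysctl -w net.core.busy_poll=1                                          # socket busy polling
    sysctl -w net.core.busy_read=1
    in_ns tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    in_ns tc qdisc add dev "$dev" ingress
    in_ns tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1       # steer NVMe/TCP flows into traffic class 1
    in_ns "$SPDK_DIR"/scripts/perf/nvmf/set_xps_rxqs "$dev"                 # align XPS with the ADQ queues

The target-side half of the configuration (sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server and nvmf_create_transport with --sock-priority 1) follows in the RPC calls traced below.
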
00:26:47.278 [2024-07-23 06:22:40.421673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.278 [2024-07-23 06:22:40.421743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.278 [2024-07-23 06:22:40.421841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.278 [2024-07-23 06:22:40.421844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.278 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.536 [2024-07-23 06:22:40.654005] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.536 Malloc1 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.536 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.537 [2024-07-23 06:22:40.705243] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1815342 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:47.537 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:47.537 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:49.435 "tick_rate": 2700000000, 00:26:49.435 "poll_groups": [ 00:26:49.435 { 00:26:49.435 "name": "nvmf_tgt_poll_group_000", 00:26:49.435 "admin_qpairs": 1, 00:26:49.435 "io_qpairs": 2, 00:26:49.435 "current_admin_qpairs": 1, 00:26:49.435 
"current_io_qpairs": 2, 00:26:49.435 "pending_bdev_io": 0, 00:26:49.435 "completed_nvme_io": 27083, 00:26:49.435 "transports": [ 00:26:49.435 { 00:26:49.435 "trtype": "TCP" 00:26:49.435 } 00:26:49.435 ] 00:26:49.435 }, 00:26:49.435 { 00:26:49.435 "name": "nvmf_tgt_poll_group_001", 00:26:49.435 "admin_qpairs": 0, 00:26:49.435 "io_qpairs": 2, 00:26:49.435 "current_admin_qpairs": 0, 00:26:49.435 "current_io_qpairs": 2, 00:26:49.435 "pending_bdev_io": 0, 00:26:49.435 "completed_nvme_io": 21393, 00:26:49.435 "transports": [ 00:26:49.435 { 00:26:49.435 "trtype": "TCP" 00:26:49.435 } 00:26:49.435 ] 00:26:49.435 }, 00:26:49.435 { 00:26:49.435 "name": "nvmf_tgt_poll_group_002", 00:26:49.435 "admin_qpairs": 0, 00:26:49.435 "io_qpairs": 0, 00:26:49.435 "current_admin_qpairs": 0, 00:26:49.435 "current_io_qpairs": 0, 00:26:49.435 "pending_bdev_io": 0, 00:26:49.435 "completed_nvme_io": 0, 00:26:49.435 "transports": [ 00:26:49.435 { 00:26:49.435 "trtype": "TCP" 00:26:49.435 } 00:26:49.435 ] 00:26:49.435 }, 00:26:49.435 { 00:26:49.435 "name": "nvmf_tgt_poll_group_003", 00:26:49.435 "admin_qpairs": 0, 00:26:49.435 "io_qpairs": 0, 00:26:49.435 "current_admin_qpairs": 0, 00:26:49.435 "current_io_qpairs": 0, 00:26:49.435 "pending_bdev_io": 0, 00:26:49.435 "completed_nvme_io": 0, 00:26:49.435 "transports": [ 00:26:49.435 { 00:26:49.435 "trtype": "TCP" 00:26:49.435 } 00:26:49.435 ] 00:26:49.435 } 00:26:49.435 ] 00:26:49.435 }' 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:49.435 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1815342 00:26:57.555 Initializing NVMe Controllers 00:26:57.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:57.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:57.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:57.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:57.556 Initialization complete. Launching workers. 
00:26:57.556 ======================================================== 00:26:57.556 Latency(us) 00:26:57.556 Device Information : IOPS MiB/s Average min max 00:26:57.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6679.20 26.09 9581.95 1928.34 54780.87 00:26:57.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6261.60 24.46 10225.20 2807.93 56052.62 00:26:57.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4883.30 19.08 13150.94 1957.91 60072.23 00:26:57.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7814.90 30.53 8214.83 1815.89 53850.01 00:26:57.556 ======================================================== 00:26:57.556 Total : 25638.99 100.15 10002.10 1815.89 60072.23 00:26:57.556 00:26:57.556 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:57.556 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:57.556 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:57.556 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:57.556 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:57.556 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:57.556 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:57.556 rmmod nvme_tcp 00:26:57.815 rmmod nvme_fabrics 00:26:57.815 rmmod nvme_keyring 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1815206 ']' 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1815206 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1815206 ']' 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1815206 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1815206 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1815206' 00:26:57.815 killing process with pid 1815206 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1815206 00:26:57.815 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1815206 00:26:58.075 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:58.075 
06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:58.075 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:58.075 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.075 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.075 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.075 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.075 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:01.362 00:27:01.362 real 0m45.088s 00:27:01.362 user 2m31.216s 00:27:01.362 sys 0m13.115s 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.362 ************************************ 00:27:01.362 END TEST nvmf_perf_adq 00:27:01.362 ************************************ 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:01.362 ************************************ 00:27:01.362 START TEST nvmf_shutdown 00:27:01.362 ************************************ 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:01.362 * Looking for test storage... 
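Each perf_adq run above was judged by querying nvmf_get_stats and checking how the I/O queue pairs were spread across the target's poll groups: the first run required all four groups to report exactly one I/O qpair, and the second (busy-poll) run required exactly two of the four groups to stay idle. A condensed sketch of the second check as the trace shows it; rpc_cmd in the trace is the autotest wrapper, so the rpc.py invocation, the count variable and the error message below are illustrative stand-ins rather than the script's literal text.

    # From the spdk repo root, against the running target:
    count=$(scripts/rpc.py nvmf_get_stats \
              | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
              | wc -l)
    # With ADQ steering in place the 4 perf cores land on 2 of the 4 poll groups,
    # so exactly 2 groups must remain idle; fewer idle groups means the I/O
    # leaked onto more poll groups than expected.
    [[ "$count" -lt 2 ]] && { echo "ADQ steering check failed: only $count idle poll groups"; exit 1; }
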
00:27:01.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.362 06:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:01.362 06:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.362 ************************************ 00:27:01.362 START TEST nvmf_shutdown_tc1 00:27:01.362 ************************************ 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.362 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:03.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:03.262 06:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:03.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:03.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:03.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.262 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.262 06:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:27:03.263 00:27:03.263 --- 10.0.0.2 ping statistics --- 00:27:03.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.263 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:27:03.263 00:27:03.263 --- 10.0.0.1 ping statistics --- 00:27:03.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.263 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1818523 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1818523 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1818523 ']' 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.263 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.263 [2024-07-23 06:22:56.407751] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:03.263 [2024-07-23 06:22:56.407838] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.263 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.263 [2024-07-23 06:22:56.444958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:03.263 [2024-07-23 06:22:56.475099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.263 [2024-07-23 06:22:56.567815] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.263 [2024-07-23 06:22:56.567864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.263 [2024-07-23 06:22:56.567880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.263 [2024-07-23 06:22:56.567894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.263 [2024-07-23 06:22:56.567906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
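The trace above is the nvmf_tcp_init/nvmfappstart sequence: one of the two ice ports (cvl_0_0) is moved into a private network namespace so NVMe/TCP traffic between initiator and target really crosses the NIC pair, and the target application is then launched inside that namespace. A condensed sketch of the same sequence, reconstructed only from the commands visible in the trace (run as root; the nvmf_tgt path is the one this job uses):

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address lives in the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) in on the initiator port
ping -c 1 10.0.0.2                                                  # sanity-check reachability in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                                                          # recorded so the teardown can stop the target later
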
00:27:03.263 [2024-07-23 06:22:56.567990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.263 [2024-07-23 06:22:56.568086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.263 [2024-07-23 06:22:56.568134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:03.263 [2024-07-23 06:22:56.568137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.520 [2024-07-23 06:22:56.721879] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.520 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.521 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.521 Malloc1 00:27:03.521 [2024-07-23 06:22:56.796756] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.521 Malloc2 00:27:03.778 Malloc3 00:27:03.778 Malloc4 00:27:03.778 Malloc5 00:27:03.778 Malloc6 00:27:03.778 Malloc7 00:27:03.778 Malloc8 00:27:04.036 Malloc9 00:27:04.036 Malloc10 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1818700 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1818700 /var/tmp/bdevperf.sock 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1818700 ']' 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:04.036 06:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:04.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.036 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.036 { 00:27:04.036 "params": { 00:27:04.036 "name": "Nvme$subsystem", 00:27:04.036 "trtype": "$TEST_TRANSPORT", 00:27:04.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.036 "adrfam": "ipv4", 00:27:04.036 "trsvcid": "$NVMF_PORT", 00:27:04.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.036 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": 
"Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.037 { 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme$subsystem", 00:27:04.037 "trtype": "$TEST_TRANSPORT", 00:27:04.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "$NVMF_PORT", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.037 "hdgst": ${hdgst:-false}, 00:27:04.037 "ddgst": ${ddgst:-false} 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 } 00:27:04.037 EOF 00:27:04.037 )") 00:27:04.037 06:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:04.037 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:04.037 "params": { 00:27:04.037 "name": "Nvme1", 00:27:04.037 "trtype": "tcp", 00:27:04.037 "traddr": "10.0.0.2", 00:27:04.037 "adrfam": "ipv4", 00:27:04.037 "trsvcid": "4420", 00:27:04.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:04.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:04.037 "hdgst": false, 00:27:04.037 "ddgst": false 00:27:04.037 }, 00:27:04.037 "method": "bdev_nvme_attach_controller" 00:27:04.037 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme2", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme3", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme4", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme5", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme6", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme7", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme8", 00:27:04.038 "trtype": "tcp", 
00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme9", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 },{ 00:27:04.038 "params": { 00:27:04.038 "name": "Nvme10", 00:27:04.038 "trtype": "tcp", 00:27:04.038 "traddr": "10.0.0.2", 00:27:04.038 "adrfam": "ipv4", 00:27:04.038 "trsvcid": "4420", 00:27:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:04.038 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:04.038 "hdgst": false, 00:27:04.038 "ddgst": false 00:27:04.038 }, 00:27:04.038 "method": "bdev_nvme_attach_controller" 00:27:04.038 }' 00:27:04.038 [2024-07-23 06:22:57.295769] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:04.038 [2024-07-23 06:22:57.295846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:04.038 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.038 [2024-07-23 06:22:57.330324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:04.038 [2024-07-23 06:22:57.359249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.296 [2024-07-23 06:22:57.446787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1818700 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:06.193 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:07.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1818700 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1818523 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.128 { 00:27:07.128 "params": { 00:27:07.128 "name": "Nvme$subsystem", 00:27:07.128 "trtype": "$TEST_TRANSPORT", 00:27:07.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.128 "adrfam": "ipv4", 00:27:07.128 "trsvcid": "$NVMF_PORT", 00:27:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.128 "hdgst": ${hdgst:-false}, 00:27:07.128 "ddgst": ${ddgst:-false} 00:27:07.128 }, 00:27:07.128 "method": "bdev_nvme_attach_controller" 00:27:07.128 } 00:27:07.128 EOF 00:27:07.128 )") 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.128 06:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.128 { 00:27:07.128 "params": { 00:27:07.128 "name": "Nvme$subsystem", 00:27:07.128 "trtype": "$TEST_TRANSPORT", 00:27:07.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.128 "adrfam": "ipv4", 00:27:07.128 "trsvcid": "$NVMF_PORT", 00:27:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.128 "hdgst": ${hdgst:-false}, 00:27:07.128 "ddgst": ${ddgst:-false} 00:27:07.128 }, 00:27:07.128 "method": "bdev_nvme_attach_controller" 00:27:07.128 } 00:27:07.128 EOF 00:27:07.128 )") 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.128 { 00:27:07.128 "params": { 00:27:07.128 "name": "Nvme$subsystem", 00:27:07.128 "trtype": "$TEST_TRANSPORT", 00:27:07.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.128 "adrfam": "ipv4", 00:27:07.128 "trsvcid": "$NVMF_PORT", 00:27:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.128 "hdgst": ${hdgst:-false}, 00:27:07.128 "ddgst": ${ddgst:-false} 00:27:07.128 }, 00:27:07.128 "method": "bdev_nvme_attach_controller" 00:27:07.128 } 00:27:07.128 EOF 00:27:07.128 )") 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.128 { 00:27:07.128 "params": { 00:27:07.128 "name": "Nvme$subsystem", 00:27:07.128 "trtype": "$TEST_TRANSPORT", 00:27:07.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.128 "adrfam": "ipv4", 00:27:07.128 "trsvcid": "$NVMF_PORT", 00:27:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.128 "hdgst": ${hdgst:-false}, 00:27:07.128 "ddgst": ${ddgst:-false} 00:27:07.128 }, 00:27:07.128 "method": "bdev_nvme_attach_controller" 00:27:07.128 } 00:27:07.128 EOF 00:27:07.128 )") 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.128 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.128 { 00:27:07.128 "params": { 00:27:07.128 "name": "Nvme$subsystem", 00:27:07.128 "trtype": "$TEST_TRANSPORT", 00:27:07.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.128 "adrfam": "ipv4", 00:27:07.128 "trsvcid": "$NVMF_PORT", 00:27:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.129 "hdgst": ${hdgst:-false}, 00:27:07.129 "ddgst": ${ddgst:-false} 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 } 00:27:07.129 EOF 00:27:07.129 )") 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.129 { 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme$subsystem", 00:27:07.129 "trtype": "$TEST_TRANSPORT", 00:27:07.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "$NVMF_PORT", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.129 "hdgst": ${hdgst:-false}, 00:27:07.129 "ddgst": ${ddgst:-false} 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 } 00:27:07.129 EOF 00:27:07.129 )") 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.129 { 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme$subsystem", 00:27:07.129 "trtype": "$TEST_TRANSPORT", 00:27:07.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "$NVMF_PORT", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.129 "hdgst": ${hdgst:-false}, 00:27:07.129 "ddgst": ${ddgst:-false} 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 } 00:27:07.129 EOF 00:27:07.129 )") 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.129 { 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme$subsystem", 00:27:07.129 "trtype": "$TEST_TRANSPORT", 00:27:07.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "$NVMF_PORT", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.129 "hdgst": ${hdgst:-false}, 00:27:07.129 "ddgst": ${ddgst:-false} 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 } 00:27:07.129 EOF 00:27:07.129 )") 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.129 { 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme$subsystem", 00:27:07.129 "trtype": "$TEST_TRANSPORT", 00:27:07.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "$NVMF_PORT", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.129 "hdgst": ${hdgst:-false}, 00:27:07.129 "ddgst": ${ddgst:-false} 00:27:07.129 }, 
00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 } 00:27:07.129 EOF 00:27:07.129 )") 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.129 { 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme$subsystem", 00:27:07.129 "trtype": "$TEST_TRANSPORT", 00:27:07.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "$NVMF_PORT", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.129 "hdgst": ${hdgst:-false}, 00:27:07.129 "ddgst": ${ddgst:-false} 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 } 00:27:07.129 EOF 00:27:07.129 )") 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:07.129 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme1", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme2", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme3", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme4", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme5", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:07.129 "hdgst": false, 
00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme6", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme7", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme8", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.129 "method": "bdev_nvme_attach_controller" 00:27:07.129 },{ 00:27:07.129 "params": { 00:27:07.129 "name": "Nvme9", 00:27:07.129 "trtype": "tcp", 00:27:07.129 "traddr": "10.0.0.2", 00:27:07.129 "adrfam": "ipv4", 00:27:07.129 "trsvcid": "4420", 00:27:07.129 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:07.129 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:07.129 "hdgst": false, 00:27:07.129 "ddgst": false 00:27:07.129 }, 00:27:07.130 "method": "bdev_nvme_attach_controller" 00:27:07.130 },{ 00:27:07.130 "params": { 00:27:07.130 "name": "Nvme10", 00:27:07.130 "trtype": "tcp", 00:27:07.130 "traddr": "10.0.0.2", 00:27:07.130 "adrfam": "ipv4", 00:27:07.130 "trsvcid": "4420", 00:27:07.130 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:07.130 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:07.130 "hdgst": false, 00:27:07.130 "ddgst": false 00:27:07.130 }, 00:27:07.130 "method": "bdev_nvme_attach_controller" 00:27:07.130 }' 00:27:07.130 [2024-07-23 06:23:00.297753] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:07.130 [2024-07-23 06:23:00.297836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819116 ] 00:27:07.130 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.130 [2024-07-23 06:23:00.333576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:07.130 [2024-07-23 06:23:00.362162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.130 [2024-07-23 06:23:00.448980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.028 Running I/O for 1 seconds... 
00:27:09.961
00:27:09.961 Latency(us)
00:27:09.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.961 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme1n1 : 1.06 241.37 15.09 0.00 0.00 262341.40 19223.89 246997.90
00:27:09.961 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme2n1 : 1.18 272.21 17.01 0.00 0.00 227117.59 19418.07 248551.35
00:27:09.961 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme3n1 : 1.18 216.78 13.55 0.00 0.00 283374.55 28350.39 287387.50
00:27:09.961 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme4n1 : 1.14 224.27 14.02 0.00 0.00 268998.73 22622.06 245444.46
00:27:09.961 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme5n1 : 1.11 230.77 14.42 0.00 0.00 256268.71 20000.62 254765.13
00:27:09.961 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme6n1 : 1.10 236.46 14.78 0.00 0.00 244394.29 2682.12 253211.69
00:27:09.961 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme7n1 : 1.19 214.82 13.43 0.00 0.00 266827.28 21651.15 276513.37
00:27:09.961 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme8n1 : 1.19 215.40 13.46 0.00 0.00 262572.94 24758.04 285834.05
00:27:09.961 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme9n1 : 1.20 266.01 16.63 0.00 0.00 209462.16 17087.91 246997.90
00:27:09.961 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.961 Verification LBA range: start 0x0 length 0x400
00:27:09.961 Nvme10n1 : 1.20 270.58 16.91 0.00 0.00 202248.19 1953.94 242337.56
00:27:09.961 ===================================================================================================================
00:27:09.961 Total : 2388.67 149.29 0.00 0.00 245823.63 1953.94 287387.50
00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:10.218 06:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:10.218 rmmod nvme_tcp 00:27:10.218 rmmod nvme_fabrics 00:27:10.218 rmmod nvme_keyring 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1818523 ']' 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1818523 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1818523 ']' 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1818523 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1818523 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1818523' 00:27:10.218 killing process with pid 1818523 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1818523 00:27:10.218 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1818523 00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
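After the run, nvmftestfini unwinds what the setup created: the nvme-tcp and nvme-fabrics initiator modules are removed, the nvmf_tgt started earlier (pid 1818523) is killed, and remove_spdk_ns plus the address flush that follows return the two ports to their original state before nvmf_shutdown_tc2 repeats the whole cycle. A rough equivalent of that teardown (the explicit netns delete is an assumption about what the remove_spdk_ns helper amounts to here):

modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics/nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # killprocess: stop the nvmf_tgt recorded at startup
ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of the remove_spdk_ns helper
ip -4 addr flush cvl_0_1         # matches the nvmf/common.sh@279 step that follows
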
00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.783 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.689 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.689 00:27:12.689 real 0m11.568s 00:27:12.689 user 0m33.422s 00:27:12.689 sys 0m3.233s 00:27:12.689 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.689 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.689 ************************************ 00:27:12.689 END TEST nvmf_shutdown_tc1 00:27:12.689 ************************************ 00:27:12.689 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:12.689 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:12.689 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:12.689 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.689 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.948 ************************************ 00:27:12.948 START TEST nvmf_shutdown_tc2 00:27:12.948 ************************************ 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.948 06:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.948 06:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:12.948 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:12.948 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.948 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:12.949 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:12.949 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.949 06:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:27:12.949 00:27:12.949 --- 10.0.0.2 ping statistics --- 00:27:12.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.949 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:27:12.949 00:27:12.949 --- 10.0.0.1 ping statistics --- 00:27:12.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.949 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1819879 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1819879 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1819879 ']' 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
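For reference, the nvmf_tcp_init sequence traced above condenses to the commands below (taken from the trace itself, nothing added): the second e810 port cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, while cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, connectivity is verified with a ping in each direction, and the target application is then started inside that namespace.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E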
00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.949 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.949 [2024-07-23 06:23:06.276910] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:12.949 [2024-07-23 06:23:06.277007] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.207 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.207 [2024-07-23 06:23:06.317563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:13.207 [2024-07-23 06:23:06.345342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.207 [2024-07-23 06:23:06.438431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.207 [2024-07-23 06:23:06.438487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.207 [2024-07-23 06:23:06.438500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.207 [2024-07-23 06:23:06.438511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.207 [2024-07-23 06:23:06.438522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.207 [2024-07-23 06:23:06.438604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.207 [2024-07-23 06:23:06.438655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.207 [2024-07-23 06:23:06.438737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:13.207 [2024-07-23 06:23:06.438739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.466 [2024-07-23 06:23:06.600191] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.466 06:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:13.466 
06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.466 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.466 Malloc1 00:27:13.466 [2024-07-23 06:23:06.685727] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.466 Malloc2 00:27:13.466 Malloc3 00:27:13.724 Malloc4 00:27:13.724 Malloc5 00:27:13.724 Malloc6 00:27:13.724 Malloc7 00:27:13.724 Malloc8 00:27:13.724 Malloc9 00:27:13.983 Malloc10 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1820053 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1820053 /var/tmp/bdevperf.sock 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1820053 ']' 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:13.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
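Reassembled from the trace above, the benchmark side of tc2 is a single bdevperf run against the ten subsystems; the /dev/fd/63 argument seen in the trace is a process substitution, so the effective command line is approximately the following (the <(...) form is a reconstruction from that file descriptor, not shown literally in the log):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10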
00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.983 { 00:27:13.983 "params": { 00:27:13.983 "name": "Nvme$subsystem", 00:27:13.983 "trtype": "$TEST_TRANSPORT", 00:27:13.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.983 "adrfam": "ipv4", 00:27:13.983 "trsvcid": "$NVMF_PORT", 00:27:13.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.983 "hdgst": ${hdgst:-false}, 00:27:13.983 "ddgst": ${ddgst:-false} 00:27:13.983 }, 00:27:13.983 "method": "bdev_nvme_attach_controller" 00:27:13.983 } 00:27:13.983 EOF 00:27:13.983 )") 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.983 { 00:27:13.983 "params": { 00:27:13.983 "name": "Nvme$subsystem", 00:27:13.983 "trtype": "$TEST_TRANSPORT", 00:27:13.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.983 "adrfam": "ipv4", 00:27:13.983 "trsvcid": "$NVMF_PORT", 00:27:13.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.983 "hdgst": ${hdgst:-false}, 00:27:13.983 "ddgst": ${ddgst:-false} 00:27:13.983 }, 00:27:13.983 "method": "bdev_nvme_attach_controller" 00:27:13.983 } 00:27:13.983 EOF 00:27:13.983 )") 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.983 { 00:27:13.983 "params": { 00:27:13.983 "name": "Nvme$subsystem", 00:27:13.983 "trtype": "$TEST_TRANSPORT", 00:27:13.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.983 "adrfam": "ipv4", 00:27:13.983 "trsvcid": "$NVMF_PORT", 00:27:13.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.983 "hdgst": ${hdgst:-false}, 00:27:13.983 "ddgst": ${ddgst:-false} 00:27:13.983 }, 00:27:13.983 "method": "bdev_nvme_attach_controller" 00:27:13.983 } 00:27:13.983 EOF 00:27:13.983 )") 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.983 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:27:13.983 { 00:27:13.983 "params": { 00:27:13.983 "name": "Nvme$subsystem", 00:27:13.983 "trtype": "$TEST_TRANSPORT", 00:27:13.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.983 "adrfam": "ipv4", 00:27:13.983 "trsvcid": "$NVMF_PORT", 00:27:13.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.983 "hdgst": ${hdgst:-false}, 00:27:13.983 "ddgst": ${ddgst:-false} 00:27:13.983 }, 00:27:13.983 "method": "bdev_nvme_attach_controller" 00:27:13.983 } 00:27:13.983 EOF 00:27:13.983 )") 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.984 { 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme$subsystem", 00:27:13.984 "trtype": "$TEST_TRANSPORT", 00:27:13.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "$NVMF_PORT", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.984 "hdgst": ${hdgst:-false}, 00:27:13.984 "ddgst": ${ddgst:-false} 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 } 00:27:13.984 EOF 00:27:13.984 )") 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.984 { 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme$subsystem", 00:27:13.984 "trtype": "$TEST_TRANSPORT", 00:27:13.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "$NVMF_PORT", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.984 "hdgst": ${hdgst:-false}, 00:27:13.984 "ddgst": ${ddgst:-false} 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 } 00:27:13.984 EOF 00:27:13.984 )") 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.984 { 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme$subsystem", 00:27:13.984 "trtype": "$TEST_TRANSPORT", 00:27:13.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "$NVMF_PORT", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.984 "hdgst": ${hdgst:-false}, 00:27:13.984 "ddgst": ${ddgst:-false} 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 } 00:27:13.984 EOF 00:27:13.984 )") 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.984 06:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.984 { 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme$subsystem", 00:27:13.984 "trtype": "$TEST_TRANSPORT", 00:27:13.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "$NVMF_PORT", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.984 "hdgst": ${hdgst:-false}, 00:27:13.984 "ddgst": ${ddgst:-false} 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 } 00:27:13.984 EOF 00:27:13.984 )") 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.984 { 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme$subsystem", 00:27:13.984 "trtype": "$TEST_TRANSPORT", 00:27:13.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "$NVMF_PORT", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.984 "hdgst": ${hdgst:-false}, 00:27:13.984 "ddgst": ${ddgst:-false} 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 } 00:27:13.984 EOF 00:27:13.984 )") 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.984 { 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme$subsystem", 00:27:13.984 "trtype": "$TEST_TRANSPORT", 00:27:13.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "$NVMF_PORT", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.984 "hdgst": ${hdgst:-false}, 00:27:13.984 "ddgst": ${ddgst:-false} 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 } 00:27:13.984 EOF 00:27:13.984 )") 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
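The gen_nvmf_target_json loop traced above emits one bdev_nvme_attach_controller block per subsystem from a heredoc template and then joins the blocks with commas. Below is a minimal sketch of that pattern; it is simplified (concrete values substituted for $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, and the outer JSON wrapper that bdevperf ultimately consumes is omitted), so treat it as an illustration rather than the verbatim nvmf/common.sh source.

gen_config() {
  local subsystem
  local config=()
  for subsystem in "${@:-1}"; do
    # one attach-controller entry per NVMe-oF subsystem
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"   # comma-joined, matching the printf traced below
}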
00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:13.984 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme1", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 },{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme2", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 },{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme3", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 },{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme4", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 },{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme5", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 },{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme6", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 },{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme7", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.984 "method": "bdev_nvme_attach_controller" 00:27:13.984 },{ 00:27:13.984 "params": { 00:27:13.984 "name": "Nvme8", 00:27:13.984 "trtype": "tcp", 00:27:13.984 "traddr": "10.0.0.2", 00:27:13.984 "adrfam": "ipv4", 00:27:13.984 "trsvcid": "4420", 00:27:13.984 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:13.984 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:13.984 "hdgst": false, 00:27:13.984 "ddgst": false 00:27:13.984 }, 00:27:13.985 "method": "bdev_nvme_attach_controller" 00:27:13.985 },{ 00:27:13.985 "params": { 00:27:13.985 "name": "Nvme9", 00:27:13.985 "trtype": "tcp", 00:27:13.985 "traddr": "10.0.0.2", 00:27:13.985 "adrfam": "ipv4", 00:27:13.985 "trsvcid": "4420", 00:27:13.985 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:13.985 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:13.985 "hdgst": false, 00:27:13.985 "ddgst": false 00:27:13.985 }, 00:27:13.985 "method": "bdev_nvme_attach_controller" 00:27:13.985 },{ 00:27:13.985 "params": { 00:27:13.985 "name": "Nvme10", 00:27:13.985 "trtype": "tcp", 00:27:13.985 "traddr": "10.0.0.2", 00:27:13.985 "adrfam": "ipv4", 00:27:13.985 "trsvcid": "4420", 00:27:13.985 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:13.985 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:13.985 "hdgst": false, 00:27:13.985 "ddgst": false 00:27:13.985 }, 00:27:13.985 "method": "bdev_nvme_attach_controller" 00:27:13.985 }' 00:27:13.985 [2024-07-23 06:23:07.207941] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:13.985 [2024-07-23 06:23:07.208029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820053 ] 00:27:13.985 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.985 [2024-07-23 06:23:07.243301] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:13.985 [2024-07-23 06:23:07.272402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.243 [2024-07-23 06:23:07.358941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.140 Running I/O for 10 seconds... 
00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:16.140 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.398 06:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:16.398 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1820053 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1820053 ']' 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1820053 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1820053 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1820053' 00:27:16.656 killing process with pid 1820053 00:27:16.656 06:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1820053
00:27:16.656 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1820053
00:27:16.656 Received shutdown signal, test time was about 0.957758 seconds
00:27:16.656
00:27:16.656 Latency(us)
00:27:16.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:16.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.656 Verification LBA range: start 0x0 length 0x400
00:27:16.656 Nvme1n1 : 0.94 203.87 12.74 0.00 0.00 310103.86 24272.59 293601.28
00:27:16.657 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme2n1 : 0.94 273.41 17.09 0.00 0.00 226571.38 20194.80 254765.13
00:27:16.657 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme3n1 : 0.91 219.06 13.69 0.00 0.00 274066.11 6262.33 279620.27
00:27:16.657 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme4n1 : 0.93 278.87 17.43 0.00 0.00 211946.60 7330.32 242337.56
00:27:16.657 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme5n1 : 0.96 200.64 12.54 0.00 0.00 291053.16 25826.04 271853.04
00:27:16.657 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme6n1 : 0.92 209.30 13.08 0.00 0.00 271545.58 19223.89 265639.25
00:27:16.657 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme7n1 : 0.90 212.47 13.28 0.00 0.00 261203.44 20388.98 257872.02
00:27:16.657 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme8n1 : 0.94 271.13 16.95 0.00 0.00 201408.47 17961.72 251658.24
00:27:16.657 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme9n1 : 0.95 202.82 12.68 0.00 0.00 263731.83 39612.87 302921.96
00:27:16.657 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:16.657 Verification LBA range: start 0x0 length 0x400
00:27:16.657 Nvme10n1 : 0.93 206.75 12.92 0.00 0.00 251766.96 21262.79 265639.25
00:27:16.657 ===================================================================================================================
00:27:16.657 Total : 2278.33 142.40 0.00 0.00 252422.91 6262.33 302921.96
00:27:16.915 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:27:17.846 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1819879
00:27:17.846 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:27:17.846 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:17.846 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.104 rmmod nvme_tcp 00:27:18.104 rmmod nvme_fabrics 00:27:18.104 rmmod nvme_keyring 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1819879 ']' 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1819879 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1819879 ']' 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1819879 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1819879 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1819879' 00:27:18.104 killing process with pid 1819879 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1819879 00:27:18.104 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1819879 00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
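With waitforio satisfied, tc2 kills bdevperf (pid 1820053), waits for it, confirms the nvmf_tgt process (pid 1819879) survived the workload via kill -0, and then runs the same stoptarget/nvmftestfini teardown already seen at the end of tc1. Condensed from the teardown trace around this point, it amounts to roughly the following (the body of _remove_spdk_ns is not shown in the log, so it is only named here):

rm -f ./local-job0-0-verify.state
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
sync
modprobe -v -r nvme-tcp        # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring unloading here
modprobe -v -r nvme-fabrics
kill 1819879 && wait 1819879   # killprocess on the nvmf_tgt pid
_remove_spdk_ns                # tears down the cvl_0_0_ns_spdk namespace (traced below)
ip -4 addr flush cvl_0_1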
00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.671 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.580 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.580 00:27:20.580 real 0m7.802s 00:27:20.580 user 0m23.563s 00:27:20.580 sys 0m1.552s 00:27:20.580 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:20.580 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.580 ************************************ 00:27:20.580 END TEST nvmf_shutdown_tc2 00:27:20.581 ************************************ 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:20.581 ************************************ 00:27:20.581 START TEST nvmf_shutdown_tc3 00:27:20.581 ************************************ 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.581 06:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
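The gather_supported_nvmf_pci_devs trace above builds per-family PCI ID lists (e810, x722, mlx) and then resolves each matching device to its network interfaces through sysfs. A minimal stand-alone sketch of that idea, written here for the Intel E810 0x8086:0x159b parts this node reports (an illustration, not the nvmf/common.sh implementation):

    for pci in /sys/bus/pci/devices/*; do
        # same match the $intel:0x159b bus-cache key expresses
        [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
        [ "$(cat "$pci/device")" = "0x159b" ] || continue
        for net in "$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

On this machine that walk ends up at the two cvl_0_0/cvl_0_1 interfaces reported a few lines below.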
00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:20.581 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:20.581 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.581 06:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:20.581 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:20.581 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.581 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.582 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.839 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.839 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.839 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.839 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:27:20.839 00:27:20.839 --- 10.0.0.2 ping statistics --- 00:27:20.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.839 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:27:20.839 00:27:20.839 --- 10.0.0.1 ping statistics --- 00:27:20.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.839 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1820960 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1820960 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1820960 ']' 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
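Condensed, the nvmf_tcp_init sequence traced here wires the two E810 ports back to back: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, and both directions are ping-verified before the target starts. The commands, taken from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The nvmf_tgt launched above inside that namespace uses -m 0x1E (binary 11110, i.e. cores 1-4), which is why four reactors come up on cores 1 through 4 in the messages that follow.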
00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.839 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:20.839 [2024-07-23 06:23:14.129066] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:20.840 [2024-07-23 06:23:14.129159] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.840 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.840 [2024-07-23 06:23:14.166863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:21.097 [2024-07-23 06:23:14.197973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.097 [2024-07-23 06:23:14.285450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.097 [2024-07-23 06:23:14.285500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.097 [2024-07-23 06:23:14.285525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.097 [2024-07-23 06:23:14.285537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.097 [2024-07-23 06:23:14.285547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.097 [2024-07-23 06:23:14.285698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.097 [2024-07-23 06:23:14.285736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.097 [2024-07-23 06:23:14.285788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:21.097 [2024-07-23 06:23:14.285790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.097 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.097 [2024-07-23 06:23:14.437177] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.355 06:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:21.355 
06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.355 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.355 Malloc1 00:27:21.355 [2024-07-23 06:23:14.526766] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.355 Malloc2 00:27:21.355 Malloc3 00:27:21.355 Malloc4 00:27:21.612 Malloc5 00:27:21.612 Malloc6 00:27:21.612 Malloc7 00:27:21.612 Malloc8 00:27:21.612 Malloc9 00:27:21.612 Malloc10 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1821141 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1821141 /var/tmp/bdevperf.sock 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1821141 ']' 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:21.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
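The bdevperf process launched above is the I/O generator for this test case: -q 64 keeps 64 outstanding I/Os per bdev, -o 65536 issues 64 KiB I/Os, -w verify does read-back verification, -t 10 runs for 10 seconds, and --json /dev/fd/63 is fed by gen_nvmf_target_json 1..10, i.e. one NVMe-oF TCP controller per subsystem nqn.2016-06.io.spdk:cnode1 through cnode10 at 10.0.0.2:4420 (the generated configuration is printed in full below). Run by hand, the same thing would look roughly like this (illustrative file name, not from this log, and assuming the helper from nvmf/common.sh is sourced):

    gen_nvmf_target_json {1..10} > /tmp/bdevperf.json
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10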
00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 "trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 "trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 "trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 
"trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 "trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 "trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 "trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.871 06:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.871 { 00:27:21.871 "params": { 00:27:21.871 "name": "Nvme$subsystem", 00:27:21.871 "trtype": "$TEST_TRANSPORT", 00:27:21.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.871 "adrfam": "ipv4", 00:27:21.871 "trsvcid": "$NVMF_PORT", 00:27:21.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.871 "hdgst": ${hdgst:-false}, 00:27:21.871 "ddgst": ${ddgst:-false} 00:27:21.871 }, 00:27:21.871 "method": "bdev_nvme_attach_controller" 00:27:21.871 } 00:27:21.871 EOF 00:27:21.871 )") 00:27:21.871 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.872 { 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme$subsystem", 00:27:21.872 "trtype": "$TEST_TRANSPORT", 00:27:21.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "$NVMF_PORT", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.872 "hdgst": ${hdgst:-false}, 00:27:21.872 "ddgst": ${ddgst:-false} 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 } 00:27:21.872 EOF 00:27:21.872 )") 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.872 { 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme$subsystem", 00:27:21.872 "trtype": "$TEST_TRANSPORT", 00:27:21.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "$NVMF_PORT", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.872 "hdgst": ${hdgst:-false}, 00:27:21.872 "ddgst": ${ddgst:-false} 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 } 00:27:21.872 EOF 00:27:21.872 )") 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:21.872 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme1", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme2", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme3", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme4", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme5", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme6", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme7", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme8", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme9", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 },{ 00:27:21.872 "params": { 00:27:21.872 "name": "Nvme10", 00:27:21.872 "trtype": "tcp", 00:27:21.872 "traddr": "10.0.0.2", 00:27:21.872 "adrfam": "ipv4", 00:27:21.872 "trsvcid": "4420", 00:27:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:21.872 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:21.872 "hdgst": false, 00:27:21.872 "ddgst": false 00:27:21.872 }, 00:27:21.872 "method": "bdev_nvme_attach_controller" 00:27:21.872 }' 00:27:21.872 [2024-07-23 06:23:15.041417] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:21.872 [2024-07-23 06:23:15.041505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821141 ] 00:27:21.872 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.872 [2024-07-23 06:23:15.076180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:21.872 [2024-07-23 06:23:15.105227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.872 [2024-07-23 06:23:15.191815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.771 Running I/O for 10 seconds... 
00:27:23.771 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.771 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:23.771 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:23.771 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.771 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:23.771 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:24.029 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1820960 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1820960 ']' 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1820960 00:27:24.287 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:27:24.563 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:24.563 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1820960 00:27:24.563 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:24.563 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:24.563 06:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1820960' 00:27:24.563 killing process with pid 1820960 00:27:24.563 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1820960 00:27:24.563 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1820960 00:27:24.563 [2024-07-23 06:23:17.662204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.662574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) 
to be set 00:27:24.563 [several dozen further identical tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x198aaf0 (06:23:17.662586 through 06:23:17.663167) omitted] 00:27:24.563 [2024-07-23 06:23:17.663178] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aaf0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.563 [2024-07-23 06:23:17.664726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the 
state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.664995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.665417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d5f0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 
06:23:17.666701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.666995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.667008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.667020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.667033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.667045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same 
with the state(5) to be set 00:27:24.564 [2024-07-23 06:23:17.667058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.667071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.667083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.667095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.667763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.667805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.667824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.667838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.667852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.667866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.667890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.667903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.667922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf88f10 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.668011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.668033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.668048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.668061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.668074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.668088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.668101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.565 [2024-07-23 06:23:17.668114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.668127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148ea0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.667108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.669998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.670018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.670030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.670044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.670056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.670069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.670083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.670096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.673486] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:24.565 [2024-07-23 06:23:17.673562] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:24.565 [2024-07-23 06:23:17.677860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf88f10 (9): Bad file descriptor 00:27:24.565 [2024-07-23 06:23:17.677951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1148ea0 (9): Bad file descriptor 00:27:24.565 [2024-07-23 06:23:17.670109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same 
with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.681414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198afb0 is same with the state(5) to be set 00:27:24.565 [2024-07-23 06:23:17.686281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.565 [2024-07-23 06:23:17.686606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.565 [2024-07-23 06:23:17.686628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.686954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.686968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.566 [2024-07-23 06:23:17.687813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.566 [2024-07-23 06:23:17.687829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.687842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.687858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.687872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.687888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.687902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.687933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.567 [2024-07-23 06:23:17.687947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.687966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.687981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.687996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 
06:23:17.688250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.567 [2024-07-23 06:23:17.688307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6f0 is same with the state(5) to be set 00:27:24.567 [2024-07-23 06:23:17.688405] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10ab6f0 was disconnected and freed. reset controller. 00:27:24.567 [2024-07-23 06:23:17.688948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.567 [2024-07-23 06:23:17.688973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.688989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.567 [2024-07-23 06:23:17.689002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.689016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.567 [2024-07-23 06:23:17.689030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.689043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.567 [2024-07-23 06:23:17.689056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.567 [2024-07-23 06:23:17.689069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154ad0 is same with the state(5) to be set 00:27:24.567 [2024-07-23 06:23:17.690484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.567 [2024-07-23 06:23:17.690520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.568 [2024-07-23 06:23:17.690535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.568 [2024-07-23 06:23:17.690547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.568 [2024-07-23 06:23:17.690560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to 
be set 00:27:24.568 [2024-07-23 06:23:17.690573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set
00:27:24.569 [2024-07-23 06:23:17.691147]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b470 is same with the state(5) to be set 00:27:24.569 [2024-07-23 06:23:17.691560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:24.569 [2024-07-23 06:23:17.691601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154ad0 (9): Bad file descriptor 00:27:24.569 [2024-07-23 06:23:17.691678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.691970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.691986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.569 [2024-07-23 06:23:17.692393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.569 [2024-07-23 06:23:17.692409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.569 [2024-07-23 06:23:17.692424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 
06:23:17.692750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.692970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.692986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693068] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b950 is same with the state(5) to be set 00:27:24.570 [2024-07-23 06:23:17.693448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 06:23:17.693461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b950 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 the state(5) to be set 00:27:24.570 [2024-07-23 06:23:17.693479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b950 is same with the state(5) to be set 00:27:24.570 [2024-07-23 06:23:17.693481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.570 [2024-07-23 06:23:17.693602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.570 [2024-07-23 06:23:17.693622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.693639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.693654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.693670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.693685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.693701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.693714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.693733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102cc70 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.694158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198be10 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.694188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198be10 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.694202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198be10 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.694215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198be10 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.694228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198be10 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.694240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198be10 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.694253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198be10 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695334] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.695349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:1the state(5) to be set 00:27:24.571 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:1[2024-07-23 06:23:17.695381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 06:23:17.695408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 06:23:17.695450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 
[2024-07-23 06:23:17.695513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.695530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:1the state(5) to be set 00:27:24.571 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.695641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1the state(5) to be set 00:27:24.571 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.695673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:24.571 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.695754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:1the state(5) to be set 00:27:24.571 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.695769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:24.571 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.571 [2024-07-23 06:23:17.695808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.571 [2024-07-23 06:23:17.695817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.571 [2024-07-23 06:23:17.695820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.695833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with 
[2024-07-23 06:23:17.695847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1the state(5) to be set 00:27:24.572 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.695864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.695885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.695890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.695904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:1[2024-07-23 06:23:17.695917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 06:23:17.695932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.695976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.695989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.695997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.696026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:1the state(5) to be set 00:27:24.572 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 06:23:17.696077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1[2024-07-23 06:23:17.696127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with [2024-07-23 06:23:17.696141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:24.572 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c2d0 is same with the state(5) to be set 00:27:24.572 [2024-07-23 06:23:17.696257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 
06:23:17.696389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.572 [2024-07-23 06:23:17.696596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.572 [2024-07-23 06:23:17.696611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.696974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.696988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.573 [2024-07-23 06:23:17.697437] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0xf83660 was disconnected and freed. reset controller. 00:27:24.573 [2024-07-23 06:23:17.697587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697722] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.573 [2024-07-23 06:23:17.697803] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.573 [2024-07-23 06:23:17.697812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.573 [2024-07-23 06:23:17.697815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.697828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.697855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.697868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.697880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.697907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.697920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.697933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 
06:23:17.697959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.697972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.697985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.697994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.697997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.698024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.698036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.698049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.698061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.698079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.698092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.574 [2024-07-23 06:23:17.698105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.698133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf84af0 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698217] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf84af0 was disconnected and freed. reset controller. 
00:27:24.574 [2024-07-23 06:23:17.698227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c790 is same with the state(5) to be set 00:27:24.574 [2024-07-23 06:23:17.698631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.698655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.698677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.698710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.698725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.574 [2024-07-23 06:23:17.698741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.574 [2024-07-23 06:23:17.698755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.698772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.698787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.698804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.698818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.698835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.698857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.698874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.698898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.698916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.698931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 
[2024-07-23 06:23:17.698947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.698961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.698977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 
[2024-07-23 06:23:17.699769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.575 [2024-07-23 06:23:17.699794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.575 [2024-07-23 06:23:17.699807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.575 [2024-07-23 06:23:17.699819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.699819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.576 [2024-07-23 06:23:17.699851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.699864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.576 [2024-07-23 06:23:17.699877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.699904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.576 [2024-07-23 06:23:17.699916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 
[2024-07-23 06:23:17.699922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.699929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.576 [2024-07-23 06:23:17.699955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.699968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.576 [2024-07-23 06:23:17.699980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.699990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.699993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.576 [2024-07-23 06:23:17.700005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.700032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.576 [2024-07-23 06:23:17.700051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.700065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:24.576 [2024-07-23 06:23:17.700078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.576 [2024-07-23 06:23:17.700091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.700393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cc70 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 
06:23:17.701323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.576 [2024-07-23 06:23:17.701400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same 
with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701929] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.701979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d130 is same with the state(5) to be set 00:27:24.577 [2024-07-23 06:23:17.712373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.577 [2024-07-23 06:23:17.712927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.577 [2024-07-23 06:23:17.712941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.712957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.578 [2024-07-23 06:23:17.712971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.712987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.578 [2024-07-23 06:23:17.713001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.713017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.578 [2024-07-23 06:23:17.713032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.713052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.578 [2024-07-23 06:23:17.713067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.713083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2920 is same with the state(5) to be set 00:27:24.578 [2024-07-23 06:23:17.714827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:24.578 [2024-07-23 06:23:17.714877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:24.578 [2024-07-23 06:23:17.715038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7e610 is same with the state(5) to be set 00:27:24.578 [2024-07-23 06:23:17.715213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7ff0 is same with the state(5) to be set 00:27:24.578 [2024-07-23 06:23:17.715395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115c740 is same with the state(5) to be set 00:27:24.578 [2024-07-23 06:23:17.715568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcb470 is same with the state(5) to be set 00:27:24.578 [2024-07-23 
06:23:17.715745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfab510 is same with the state(5) to be set 00:27:24.578 [2024-07-23 06:23:17.715913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.715982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.715996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.716011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.716024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.716037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11273b0 is same with the state(5) to be set 00:27:24.578 [2024-07-23 06:23:17.716087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.716108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.716123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.716138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.716152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.716166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.716181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.578 [2024-07-23 06:23:17.716195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.578 [2024-07-23 06:23:17.716208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1126e10 is same with the state(5) to be set 00:27:24.578 [2024-07-23 06:23:17.718943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:24.578 [2024-07-23 06:23:17.718979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:24.578 [2024-07-23 06:23:17.719007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcb470 (9): Bad file descriptor 00:27:24.578 [2024-07-23 06:23:17.719031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb7ff0 (9): Bad file descriptor 00:27:24.578 [2024-07-23 06:23:17.719220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-23 06:23:17.719249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1154ad0 with addr=10.0.0.2, port=4420 00:27:24.578 [2024-07-23 06:23:17.719266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154ad0 is same with the state(5) to be set 00:27:24.579 [2024-07-23 06:23:17.719422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.579 [2024-07-23 06:23:17.719448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf88f10 with addr=10.0.0.2, port=4420 00:27:24.579 [2024-07-23 06:23:17.719464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf88f10 is same with the state(5) to be set 00:27:24.579 [2024-07-23 06:23:17.719630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.579 [2024-07-23 06:23:17.719657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1148ea0 with addr=10.0.0.2, port=4420 00:27:24.579 [2024-07-23 06:23:17.719674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148ea0 is same with the state(5) to be set 00:27:24.579 [2024-07-23 06:23:17.720576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154ad0 (9): Bad file descriptor 00:27:24.579 [2024-07-23 06:23:17.720606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf88f10 (9): Bad file descriptor 00:27:24.579 [2024-07-23 06:23:17.720635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1148ea0 (9): Bad file descriptor 00:27:24.579 
[2024-07-23 06:23:17.721844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.579 [2024-07-23 06:23:17.721874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb7ff0 with addr=10.0.0.2, port=4420 00:27:24.579 [2024-07-23 06:23:17.721892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7ff0 is same with the state(5) to be set 00:27:24.579 [2024-07-23 06:23:17.722042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.579 [2024-07-23 06:23:17.722068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcb470 with addr=10.0.0.2, port=4420 00:27:24.579 [2024-07-23 06:23:17.722084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcb470 is same with the state(5) to be set 00:27:24.579 [2024-07-23 06:23:17.722101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:24.579 [2024-07-23 06:23:17.722114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:24.579 [2024-07-23 06:23:17.722130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:24.579 [2024-07-23 06:23:17.722152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:24.579 [2024-07-23 06:23:17.722167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:24.579 [2024-07-23 06:23:17.722180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:24.579 [2024-07-23 06:23:17.722198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:24.579 [2024-07-23 06:23:17.722212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:24.579 [2024-07-23 06:23:17.722225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:24.579 [2024-07-23 06:23:17.722304] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:24.579 [2024-07-23 06:23:17.722398] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:24.579 [2024-07-23 06:23:17.722606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.579 [2024-07-23 06:23:17.722638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.579 [2024-07-23 06:23:17.722652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:24.579 [2024-07-23 06:23:17.722669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb7ff0 (9): Bad file descriptor 00:27:24.579 [2024-07-23 06:23:17.722689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcb470 (9): Bad file descriptor 00:27:24.579 [2024-07-23 06:23:17.722796] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:24.579 [2024-07-23 06:23:17.722876] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:24.579 [2024-07-23 06:23:17.722905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:24.579 [2024-07-23 06:23:17.722927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:24.579 [2024-07-23 06:23:17.722942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:24.579 [2024-07-23 06:23:17.722962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:24.579 [2024-07-23 06:23:17.722977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:24.579 [2024-07-23 06:23:17.722990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:24.579 [2024-07-23 06:23:17.723075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 
[2024-07-23 06:23:17.723611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.579 [2024-07-23 06:23:17.723796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.579 [2024-07-23 06:23:17.723812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.723827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.723848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.723863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.723880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.723894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.723911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.723925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 
06:23:17.723942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.723957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.723973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.723987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724252] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.724971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.724985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.725001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.725020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.725037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.580 [2024-07-23 06:23:17.725053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.580 [2024-07-23 06:23:17.725069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.725084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.725100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.725114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.725129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b03c0 is same with the state(5) to be set 00:27:24.581 [2024-07-23 06:23:17.725210] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10b03c0 was disconnected and freed. reset controller. 00:27:24.581 [2024-07-23 06:23:17.725253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.581 [2024-07-23 06:23:17.725276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:24.581 [2024-07-23 06:23:17.725329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7e610 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.725366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115c740 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.725397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfab510 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.725426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11273b0 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.725458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1126e10 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.726699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:24.581 [2024-07-23 06:23:17.726781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:24.581 [2024-07-23 06:23:17.726806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:24.581 [2024-07-23 06:23:17.726823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:24.581 [2024-07-23 06:23:17.727002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-23 06:23:17.727031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1126e10 with addr=10.0.0.2, port=4420 00:27:24.581 [2024-07-23 06:23:17.727048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1126e10 is same with the state(5) to be set 00:27:24.581 [2024-07-23 06:23:17.727486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-23 06:23:17.727513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1148ea0 with addr=10.0.0.2, port=4420 00:27:24.581 [2024-07-23 06:23:17.727529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148ea0 is same with the state(5) to be set 00:27:24.581 [2024-07-23 06:23:17.727678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-23 06:23:17.727704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf88f10 with addr=10.0.0.2, port=4420 00:27:24.581 [2024-07-23 06:23:17.727720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf88f10 is same with the state(5) to be set 00:27:24.581 [2024-07-23 06:23:17.727866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-23 06:23:17.727891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1154ad0 with addr=10.0.0.2, port=4420 00:27:24.581 [2024-07-23 06:23:17.727907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154ad0 is same with the state(5) to be set 00:27:24.581 [2024-07-23 06:23:17.727927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1126e10 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.727988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1148ea0 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.728012] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf88f10 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.728031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154ad0 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.728047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:24.581 [2024-07-23 06:23:17.728060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:24.581 [2024-07-23 06:23:17.728075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:24.581 [2024-07-23 06:23:17.728134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.581 [2024-07-23 06:23:17.728154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:24.581 [2024-07-23 06:23:17.728168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:24.581 [2024-07-23 06:23:17.728182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:24.581 [2024-07-23 06:23:17.728200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:24.581 [2024-07-23 06:23:17.728214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:24.581 [2024-07-23 06:23:17.728227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:24.581 [2024-07-23 06:23:17.728243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:24.581 [2024-07-23 06:23:17.728258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:24.581 [2024-07-23 06:23:17.728271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:24.581 [2024-07-23 06:23:17.728314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.581 [2024-07-23 06:23:17.728332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.581 [2024-07-23 06:23:17.728344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:24.581 [2024-07-23 06:23:17.730681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:24.581 [2024-07-23 06:23:17.730752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:24.581 [2024-07-23 06:23:17.730915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-23 06:23:17.730944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcb470 with addr=10.0.0.2, port=4420 00:27:24.581 [2024-07-23 06:23:17.730961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcb470 is same with the state(5) to be set 00:27:24.581 [2024-07-23 06:23:17.731127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-23 06:23:17.731154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb7ff0 with addr=10.0.0.2, port=4420 00:27:24.581 [2024-07-23 06:23:17.731177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7ff0 is same with the state(5) to be set 00:27:24.581 [2024-07-23 06:23:17.731197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcb470 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.731245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb7ff0 (9): Bad file descriptor 00:27:24.581 [2024-07-23 06:23:17.731265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:24.581 [2024-07-23 06:23:17.731279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:24.581 [2024-07-23 06:23:17.731293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:24.581 [2024-07-23 06:23:17.731339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.581 [2024-07-23 06:23:17.731357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:24.581 [2024-07-23 06:23:17.731370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:24.581 [2024-07-23 06:23:17.731383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:24.581 [2024-07-23 06:23:17.731427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:24.581 [2024-07-23 06:23:17.735416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 06:23:17.735763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.581 [2024-07-23 06:23:17.735777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.581 [2024-07-23 
06:23:17.735794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.735809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.735825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.735839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.735856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.735880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.735895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.735910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.735926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.735940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.735956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.735970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.735987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.582 [2024-07-23 06:23:17.736868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.582 [2024-07-23 06:23:17.736882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.736898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.736921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.736938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.736952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.736968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.736982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.736998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.737472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.737488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10acb80 is same with the state(5) to be set 00:27:24.583 [2024-07-23 06:23:17.738816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.738840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.738871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.738887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.738904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.738918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.738934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.738949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.738965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.738979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.738995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739010] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.583 [2024-07-23 06:23:17.739432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.583 [2024-07-23 06:23:17.739447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.739944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:24.584 [2024-07-23 06:23:17.739974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.739988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 
[2024-07-23 06:23:17.740276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 
06:23:17.740577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.584 [2024-07-23 06:23:17.740684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.584 [2024-07-23 06:23:17.740700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.740715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.740731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.740746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.740762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.740776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.740807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.740823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.740837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.740852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ad9d0 is same with the state(5) to be set 00:27:24.585 [2024-07-23 06:23:17.742111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.742971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.742987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.743001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.743018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.743032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.743049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.743068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.743084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.743099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.743115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.743129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.743145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.743158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.743174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.585 [2024-07-23 06:23:17.743189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.585 [2024-07-23 06:23:17.743204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.743977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.743994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.744008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.744024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.744038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.744053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.744067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.744083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.744097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.744112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.744126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.744141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aeea0 is same with the state(5) to be set 00:27:24.586 [2024-07-23 06:23:17.745405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.745428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.745450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.745466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.745483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.745501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.586 [2024-07-23 06:23:17.745518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.586 [2024-07-23 06:23:17.745532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.745974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.745989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.587 [2024-07-23 06:23:17.746770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.587 [2024-07-23 06:23:17.746786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.746800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.746816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.746830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.746846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.746867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.746883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.746897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.746913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.588 [2024-07-23 06:23:17.746927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.746943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.746958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.746974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.746988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 
06:23:17.747234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.588 [2024-07-23 06:23:17.747421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.588 [2024-07-23 06:23:17.747435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1440 is same with the state(5) to be set 00:27:24.588 [2024-07-23 06:23:17.749718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:24.588 [2024-07-23 06:23:17.749752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:24.588 [2024-07-23 06:23:17.749777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:24.588 task offset: 8192 on job bdev=Nvme2n1 fails
00:27:24.588
00:27:24.588 Latency(us)
00:27:24.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:24.588 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme1n1 ended in about 0.94 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme1n1 : 0.94 135.86 8.49 67.93 0.00 310669.46 23301.69 310689.19
00:27:24.588 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme2n1 ended in about 0.94 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme2n1 : 0.94 68.27 4.27 68.27 0.00 454802.39 42913.94 487782.02
00:27:24.588 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme3n1 ended in about 0.99 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme3n1 : 0.99 259.68 16.23 64.92 0.00 187703.45 18155.90 251658.24
00:27:24.588 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme4n1 ended in about 0.96 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme4n1 : 0.96 198.98 12.44 66.33 0.00 224955.73 21651.15 274959.93
00:27:24.588 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme5n1 ended in about 0.97 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme5n1 : 0.97 265.06 16.57 16.57 0.00 202903.17 19515.16 219035.88
00:27:24.588 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme6n1 ended in about 0.99 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme6n1 : 0.99 64.70 4.04 64.70 0.00 444235.47 49321.91 431857.97
00:27:24.588 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme7n1 ended in about 0.99 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme7n1 : 0.99 257.96 16.12 64.49 0.00 174614.76 19223.89 201947.97
00:27:24.588 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme8n1 ended in about 0.97 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme8n1 : 0.97 197.16 12.32 65.72 0.00 209307.12 26214.40 231463.44
00:27:24.588 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme9n1 ended in about 1.00 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme9n1 : 1.00 128.55 8.03 64.28 0.00 280499.14 26214.40 264085.81
00:27:24.588 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:24.588 Job: Nvme10n1 ended in about 0.96 seconds with error
00:27:24.588 Verification LBA range: start 0x0 length 0x400
00:27:24.588 Nvme10n1 : 0.96 66.57 4.16 66.57 0.00 395182.84 41554.68 416323.51
00:27:24.588 ===================================================================================================================
00:27:24.588 Total : 1642.79 102.67 609.77 0.00 256143.89 18155.90 487782.02
00:27:24.588 [2024-07-23 06:23:17.778553] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:24.588 [2024-07-23 06:23:17.778641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:24.588 [2024-07-23 06:23:17.779246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-23 06:23:17.779284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115c740 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-07-23 06:23:17.779304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115c740 is same with the state(5) to be set 00:27:24.588 [2024-07-23 06:23:17.779480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-23 06:23:17.779521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0xa7e610 with addr=10.0.0.2, port=4420 00:27:24.588 [2024-07-23 06:23:17.779538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7e610 is same with the state(5) to be set 00:27:24.588 [2024-07-23 06:23:17.779678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-23 06:23:17.779706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfab510 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.779723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfab510 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.779889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-07-23 06:23:17.779916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11273b0 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.779932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11273b0 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.779965] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:24.589 [2024-07-23 06:23:17.779988] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:24.589 [2024-07-23 06:23:17.780007] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:24.589 [2024-07-23 06:23:17.780025] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:24.589 [2024-07-23 06:23:17.780043] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:24.589 [2024-07-23 06:23:17.780060] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:24.589 [2024-07-23 06:23:17.781115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:24.589 [2024-07-23 06:23:17.781144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:24.589 [2024-07-23 06:23:17.781162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:24.589 [2024-07-23 06:23:17.781179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:24.589 [2024-07-23 06:23:17.781195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:24.589 [2024-07-23 06:23:17.781210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:24.589 [2024-07-23 06:23:17.781298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115c740 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.781327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7e610 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.781345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfab510 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.781362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11273b0 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.781872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-07-23 06:23:17.781903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1126e10 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.781920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1126e10 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.782068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-07-23 06:23:17.782093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1154ad0 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.782115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154ad0 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.782254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-07-23 06:23:17.782281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf88f10 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.782297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf88f10 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.782437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-07-23 06:23:17.782464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1148ea0 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.782481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148ea0 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.782633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-07-23 06:23:17.782664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcb470 with 
addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.782680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcb470 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.782839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.589 [2024-07-23 06:23:17.782874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb7ff0 with addr=10.0.0.2, port=4420 00:27:24.589 [2024-07-23 06:23:17.782891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7ff0 is same with the state(5) to be set 00:27:24.589 [2024-07-23 06:23:17.782906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.782919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.782936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:24.589 [2024-07-23 06:23:17.782956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.782971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.782984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:24.589 [2024-07-23 06:23:17.783209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1126e10 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.783234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154ad0 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.783253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf88f10 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.783270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1148ea0 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.783287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcb470 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.783304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb7ff0 (9): Bad file descriptor 00:27:24.589 [2024-07-23 06:23:17.783358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:27:24.589 [2024-07-23 06:23:17.783582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:24.589 [2024-07-23 06:23:17.783597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:24.589 [2024-07-23 06:23:17.783610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:24.589 [2024-07-23 06:23:17.783665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.589 [2024-07-23 06:23:17.783733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.160 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:25.160 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1821141 00:27:26.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1821141) - No such process 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.097 rmmod nvme_tcp 00:27:26.097 rmmod nvme_fabrics 00:27:26.097 rmmod nvme_keyring 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.097 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.631 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.631 00:27:28.631 real 0m7.471s 00:27:28.631 user 0m17.714s 00:27:28.631 sys 0m1.552s 00:27:28.631 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:28.632 ************************************ 00:27:28.632 END TEST nvmf_shutdown_tc3 00:27:28.632 ************************************ 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:28.632 00:27:28.632 real 0m27.059s 00:27:28.632 user 1m14.783s 00:27:28.632 sys 0m6.484s 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:28.632 ************************************ 00:27:28.632 END TEST nvmf_shutdown 00:27:28.632 ************************************ 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:28.632 00:27:28.632 real 16m46.996s 00:27:28.632 user 46m47.521s 00:27:28.632 sys 3m58.982s 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.632 06:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:28.632 ************************************ 00:27:28.632 END TEST nvmf_target_extra 00:27:28.632 ************************************ 00:27:28.632 06:23:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:28.632 
06:23:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:28.632 06:23:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:28.632 06:23:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.632 06:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.632 ************************************ 00:27:28.632 START TEST nvmf_host 00:27:28.632 ************************************ 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:28.632 * Looking for test storage... 00:27:28.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.632 ************************************ 00:27:28.632 START TEST nvmf_multicontroller 00:27:28.632 ************************************ 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:28.632 * Looking for test storage... 00:27:28.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.632 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.633 06:23:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:28.633 06:23:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@296 -- # local -ga e810 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.538 06:23:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.538 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.538 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.538 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:30.538 06:23:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:27:30.538 00:27:30.538 --- 10.0.0.2 ping statistics --- 00:27:30.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.538 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:27:30.538 00:27:30.538 --- 10.0.0.1 ping statistics --- 00:27:30.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.538 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1823573 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1823573 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1823573 ']' 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.538 06:23:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.538 [2024-07-23 06:23:23.785145] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
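[Note] Condensed, the nvmf_tcp_init/nvmfappstart sequence recorded above amounts to the following shell steps. This is a minimal sketch with the interface, namespace, and address values taken from this run (cvl_0_0/cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1/10.0.0.2), not the full common.sh logic:
  # move the target-side port into its own network namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to port 4420 and verify connectivity both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # the nvmf target is then launched inside the namespace (startup banner continues below)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE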
00:27:30.538 [2024-07-23 06:23:23.785226] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.538 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.538 [2024-07-23 06:23:23.823408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:30.538 [2024-07-23 06:23:23.849674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:30.796 [2024-07-23 06:23:23.945927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.796 [2024-07-23 06:23:23.945991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.796 [2024-07-23 06:23:23.946006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.796 [2024-07-23 06:23:23.946019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.796 [2024-07-23 06:23:23.946031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.796 [2024-07-23 06:23:23.946089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.796 [2024-07-23 06:23:23.946211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.796 [2024-07-23 06:23:23.946213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.796 [2024-07-23 06:23:24.084605] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.796 Malloc0 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.796 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 [2024-07-23 06:23:24.145065] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 [2024-07-23 06:23:24.152983] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 Malloc1 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.055 06:23:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1823714 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1823714 /var/tmp/bdevperf.sock 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1823714 ']' 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:31.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
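[Note] The rpc_cmd calls issued by multicontroller.sh up to this point (lines 27-41 of the script) correspond roughly to plain scripts/rpc.py invocations against the target's default /var/tmp/spdk.sock; a sketch with the parameters copied from this run:
  # transport, backing bdevs, two subsystems, and two TCP listeners per subsystem
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf is started with its own RPC socket and waits (-z) for controllers to be attached over it
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f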
00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.055 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 NVMe0n1 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.314 1 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 request: 00:27:31.314 { 00:27:31.314 "name": "NVMe0", 00:27:31.314 "trtype": "tcp", 00:27:31.314 "traddr": "10.0.0.2", 00:27:31.314 "adrfam": "ipv4", 00:27:31.314 
"trsvcid": "4420", 00:27:31.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.314 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:31.314 "hostaddr": "10.0.0.2", 00:27:31.314 "hostsvcid": "60000", 00:27:31.314 "prchk_reftag": false, 00:27:31.314 "prchk_guard": false, 00:27:31.314 "hdgst": false, 00:27:31.314 "ddgst": false, 00:27:31.314 "method": "bdev_nvme_attach_controller", 00:27:31.314 "req_id": 1 00:27:31.314 } 00:27:31.314 Got JSON-RPC error response 00:27:31.314 response: 00:27:31.314 { 00:27:31.314 "code": -114, 00:27:31.314 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:31.314 } 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 request: 00:27:31.314 { 00:27:31.314 "name": "NVMe0", 00:27:31.314 "trtype": "tcp", 00:27:31.314 "traddr": "10.0.0.2", 00:27:31.314 "adrfam": "ipv4", 00:27:31.314 "trsvcid": "4420", 00:27:31.314 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:31.314 "hostaddr": "10.0.0.2", 00:27:31.314 "hostsvcid": "60000", 00:27:31.314 "prchk_reftag": false, 00:27:31.314 "prchk_guard": false, 00:27:31.314 "hdgst": false, 00:27:31.314 "ddgst": false, 00:27:31.314 "method": "bdev_nvme_attach_controller", 00:27:31.314 "req_id": 1 00:27:31.314 } 00:27:31.314 Got JSON-RPC error response 00:27:31.314 response: 00:27:31.314 { 00:27:31.314 "code": -114, 00:27:31.314 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:27:31.314 } 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.314 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 request: 00:27:31.314 { 00:27:31.314 "name": "NVMe0", 00:27:31.314 "trtype": "tcp", 00:27:31.314 "traddr": "10.0.0.2", 00:27:31.314 "adrfam": "ipv4", 00:27:31.572 "trsvcid": "4420", 00:27:31.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.572 "hostaddr": "10.0.0.2", 00:27:31.572 "hostsvcid": "60000", 00:27:31.572 "prchk_reftag": false, 00:27:31.572 "prchk_guard": false, 00:27:31.572 "hdgst": false, 00:27:31.572 "ddgst": false, 00:27:31.572 "multipath": "disable", 00:27:31.572 "method": "bdev_nvme_attach_controller", 00:27:31.572 "req_id": 1 00:27:31.572 } 00:27:31.572 Got JSON-RPC error response 00:27:31.572 response: 00:27:31.572 { 00:27:31.572 "code": -114, 00:27:31.572 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:31.572 } 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.572 request: 00:27:31.572 { 00:27:31.572 "name": "NVMe0", 00:27:31.572 "trtype": "tcp", 00:27:31.572 "traddr": "10.0.0.2", 00:27:31.572 "adrfam": "ipv4", 00:27:31.572 "trsvcid": "4420", 00:27:31.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.572 "hostaddr": "10.0.0.2", 00:27:31.572 "hostsvcid": "60000", 00:27:31.572 "prchk_reftag": false, 00:27:31.572 "prchk_guard": false, 00:27:31.572 "hdgst": false, 00:27:31.572 "ddgst": false, 00:27:31.572 "multipath": "failover", 00:27:31.572 "method": "bdev_nvme_attach_controller", 00:27:31.572 "req_id": 1 00:27:31.572 } 00:27:31.572 Got JSON-RPC error response 00:27:31.572 response: 00:27:31.572 { 00:27:31.572 "code": -114, 00:27:31.572 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:31.572 } 00:27:31.572 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.573 00:27:31.573 06:23:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.573 06:23:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.830 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:31.830 06:23:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:33.205 0 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1823714 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1823714 ']' 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1823714 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1823714 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1823714' 00:27:33.205 killing process with pid 1823714 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1823714 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1823714 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:33.205 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:33.205 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:33.205 [2024-07-23 06:23:24.253494] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:33.205 [2024-07-23 06:23:24.253597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1823714 ] 00:27:33.205 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.205 [2024-07-23 06:23:24.286866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:33.205 [2024-07-23 06:23:24.314948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.205 [2024-07-23 06:23:24.401741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.205 [2024-07-23 06:23:25.015861] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 97a674f2-f2f5-4d8a-b62a-d700c554f901 already exists 00:27:33.205 [2024-07-23 06:23:25.015905] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:97a674f2-f2f5-4d8a-b62a-d700c554f901 alias for bdev NVMe1n1 00:27:33.205 [2024-07-23 06:23:25.015935] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:33.205 Running I/O for 1 seconds... 00:27:33.205 00:27:33.205 Latency(us) 00:27:33.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.205 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:33.205 NVMe0n1 : 1.00 18968.07 74.09 0.00 0.00 6738.10 4150.61 12233.39 00:27:33.205 =================================================================================================================== 00:27:33.206 Total : 18968.07 74.09 0.00 0.00 6738.10 4150.61 12233.39 00:27:33.206 Received shutdown signal, test time was about 1.000000 seconds 00:27:33.206 00:27:33.206 Latency(us) 00:27:33.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.206 =================================================================================================================== 00:27:33.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.206 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.206 rmmod nvme_tcp 00:27:33.206 rmmod nvme_fabrics 00:27:33.206 rmmod nvme_keyring 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1823573 ']' 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1823573 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1823573 ']' 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1823573 00:27:33.206 06:23:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1823573 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1823573' 00:27:33.206 killing process with pid 1823573 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1823573 00:27:33.206 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1823573 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.464 06:23:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:35.999 00:27:35.999 real 0m7.248s 00:27:35.999 user 0m10.848s 00:27:35.999 sys 0m2.361s 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:35.999 ************************************ 00:27:35.999 END TEST nvmf_multicontroller 00:27:35.999 ************************************ 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.999 ************************************ 00:27:35.999 START TEST nvmf_aer 00:27:35.999 ************************************ 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:35.999 * Looking for test storage... 
00:27:35.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:35.999 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.000 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:37.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:37.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.902 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:37.903 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.903 06:23:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:37.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:37.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:27:37.903 00:27:37.903 --- 10.0.0.2 ping statistics --- 00:27:37.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.903 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:27:37.903 00:27:37.903 --- 10.0.0.1 ping statistics --- 00:27:37.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.903 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1825919 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1825919 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1825919 ']' 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.903 06:23:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:37.903 [2024-07-23 06:23:31.004861] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:27:37.903 [2024-07-23 06:23:31.004942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.903 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.903 [2024-07-23 06:23:31.053608] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:37.903 [2024-07-23 06:23:31.084188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.903 [2024-07-23 06:23:31.180931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.903 [2024-07-23 06:23:31.180999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.903 [2024-07-23 06:23:31.181016] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.903 [2024-07-23 06:23:31.181030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.903 [2024-07-23 06:23:31.181042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.903 [2024-07-23 06:23:31.181106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.903 [2024-07-23 06:23:31.181134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.903 [2024-07-23 06:23:31.181200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.903 [2024-07-23 06:23:31.181202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.162 [2024-07-23 06:23:31.342078] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.162 Malloc0 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:38.162 06:23:31 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.162 [2024-07-23 06:23:31.395641] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.162 [ 00:27:38.162 { 00:27:38.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:38.162 "subtype": "Discovery", 00:27:38.162 "listen_addresses": [], 00:27:38.162 "allow_any_host": true, 00:27:38.162 "hosts": [] 00:27:38.162 }, 00:27:38.162 { 00:27:38.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.162 "subtype": "NVMe", 00:27:38.162 "listen_addresses": [ 00:27:38.162 { 00:27:38.162 "trtype": "TCP", 00:27:38.162 "adrfam": "IPv4", 00:27:38.162 "traddr": "10.0.0.2", 00:27:38.162 "trsvcid": "4420" 00:27:38.162 } 00:27:38.162 ], 00:27:38.162 "allow_any_host": true, 00:27:38.162 "hosts": [], 00:27:38.162 "serial_number": "SPDK00000000000001", 00:27:38.162 "model_number": "SPDK bdev Controller", 00:27:38.162 "max_namespaces": 2, 00:27:38.162 "min_cntlid": 1, 00:27:38.162 "max_cntlid": 65519, 00:27:38.162 "namespaces": [ 00:27:38.162 { 00:27:38.162 "nsid": 1, 00:27:38.162 "bdev_name": "Malloc0", 00:27:38.162 "name": "Malloc0", 00:27:38.162 "nguid": "2E10B32DFDE349C8A4AAFE51BD9DA956", 00:27:38.162 "uuid": "2e10b32d-fde3-49c8-a4aa-fe51bd9da956" 00:27:38.162 } 00:27:38.162 ] 00:27:38.162 } 00:27:38.162 ] 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1825944 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@1265 -- # local i=0 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:38.162 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:38.163 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:38.163 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.421 Malloc1 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.421 [ 00:27:38.421 { 00:27:38.421 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:38.421 "subtype": "Discovery", 00:27:38.421 "listen_addresses": [], 00:27:38.421 "allow_any_host": true, 00:27:38.421 "hosts": [] 00:27:38.421 }, 00:27:38.421 { 00:27:38.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.421 "subtype": "NVMe", 00:27:38.421 "listen_addresses": [ 00:27:38.421 { 00:27:38.421 "trtype": "TCP", 00:27:38.421 "adrfam": "IPv4", 00:27:38.421 "traddr": "10.0.0.2", 00:27:38.421 "trsvcid": "4420" 00:27:38.421 } 00:27:38.421 ], 00:27:38.421 "allow_any_host": true, 00:27:38.421 "hosts": [], 00:27:38.421 "serial_number": "SPDK00000000000001", 00:27:38.421 "model_number": "SPDK bdev Controller", 00:27:38.421 "max_namespaces": 2, 00:27:38.421 "min_cntlid": 1, 00:27:38.421 "max_cntlid": 65519, 00:27:38.421 "namespaces": [ 00:27:38.421 { 00:27:38.421 "nsid": 1, 00:27:38.421 "bdev_name": "Malloc0", 00:27:38.421 "name": "Malloc0", 00:27:38.421 "nguid": "2E10B32DFDE349C8A4AAFE51BD9DA956", 00:27:38.421 "uuid": "2e10b32d-fde3-49c8-a4aa-fe51bd9da956" 
00:27:38.421 }, 00:27:38.421 { 00:27:38.421 "nsid": 2, 00:27:38.421 "bdev_name": "Malloc1", 00:27:38.421 "name": "Malloc1", 00:27:38.421 "nguid": "01169196A60F47058C1656006CF5F842", 00:27:38.421 "uuid": "01169196-a60f-4705-8c16-56006cf5f842" 00:27:38.421 } 00:27:38.421 ] 00:27:38.421 } 00:27:38.421 ] 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1825944 00:27:38.421 Asynchronous Event Request test 00:27:38.421 Attaching to 10.0.0.2 00:27:38.421 Attached to 10.0.0.2 00:27:38.421 Registering asynchronous event callbacks... 00:27:38.421 Starting namespace attribute notice tests for all controllers... 00:27:38.421 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:38.421 aer_cb - Changed Namespace 00:27:38.421 Cleaning up... 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.421 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:38.680 rmmod nvme_tcp 00:27:38.680 rmmod nvme_fabrics 00:27:38.680 rmmod nvme_keyring 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1825919 ']' 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1825919 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1825919 ']' 00:27:38.680 
06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1825919 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1825919 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1825919' 00:27:38.680 killing process with pid 1825919 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1825919 00:27:38.680 06:23:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1825919 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.940 06:23:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:40.850 00:27:40.850 real 0m5.260s 00:27:40.850 user 0m4.228s 00:27:40.850 sys 0m1.805s 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.850 ************************************ 00:27:40.850 END TEST nvmf_aer 00:27:40.850 ************************************ 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.850 ************************************ 00:27:40.850 START TEST nvmf_async_init 00:27:40.850 ************************************ 00:27:40.850 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:41.108 * Looking for test storage... 
00:27:41.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:41.108 06:23:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=99531e2336594fa6968ad8971625158f 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.108 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.109 06:23:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:43.012 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:43.012 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:43.012 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:43.012 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:27:43.012 00:27:43.012 --- 10.0.0.2 ping statistics --- 00:27:43.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.012 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:43.012 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:27:43.012 00:27:43.012 --- 10.0.0.1 ping statistics --- 00:27:43.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.012 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.013 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1827883 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1827883 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1827883 ']' 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.271 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.271 [2024-07-23 06:23:36.401371] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:43.271 [2024-07-23 06:23:36.401459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.271 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.271 [2024-07-23 06:23:36.440567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:43.271 [2024-07-23 06:23:36.474382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.271 [2024-07-23 06:23:36.568931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.271 [2024-07-23 06:23:36.568996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.271 [2024-07-23 06:23:36.569023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.271 [2024-07-23 06:23:36.569037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.271 [2024-07-23 06:23:36.569048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.271 [2024-07-23 06:23:36.569078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.529 [2024-07-23 06:23:36.716123] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.529 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.530 null0 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 99531e2336594fa6968ad8971625158f 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.530 [2024-07-23 06:23:36.756407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.530 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.787 nvme0n1 00:27:43.787 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.787 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:43.787 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.787 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.787 [ 00:27:43.787 { 00:27:43.787 "name": "nvme0n1", 00:27:43.787 "aliases": [ 00:27:43.787 "99531e23-3659-4fa6-968a-d8971625158f" 00:27:43.787 ], 00:27:43.787 "product_name": "NVMe disk", 00:27:43.787 "block_size": 512, 00:27:43.787 "num_blocks": 2097152, 00:27:43.787 "uuid": "99531e23-3659-4fa6-968a-d8971625158f", 00:27:43.787 "assigned_rate_limits": { 00:27:43.787 "rw_ios_per_sec": 0, 00:27:43.787 "rw_mbytes_per_sec": 0, 00:27:43.787 "r_mbytes_per_sec": 0, 00:27:43.787 "w_mbytes_per_sec": 0 00:27:43.787 }, 00:27:43.787 "claimed": false, 00:27:43.787 "zoned": false, 00:27:43.787 "supported_io_types": { 00:27:43.787 "read": true, 00:27:43.787 "write": true, 00:27:43.787 "unmap": false, 00:27:43.787 "flush": true, 00:27:43.787 "reset": true, 00:27:43.787 "nvme_admin": true, 00:27:43.787 "nvme_io": true, 00:27:43.787 "nvme_io_md": false, 00:27:43.787 "write_zeroes": true, 00:27:43.787 "zcopy": false, 00:27:43.787 "get_zone_info": false, 00:27:43.787 "zone_management": false, 00:27:43.787 "zone_append": false, 00:27:43.787 "compare": true, 00:27:43.787 "compare_and_write": true, 00:27:43.787 "abort": true, 00:27:43.787 "seek_hole": false, 00:27:43.787 "seek_data": false, 00:27:43.787 "copy": true, 00:27:43.787 "nvme_iov_md": false 00:27:43.787 }, 00:27:43.787 "memory_domains": [ 00:27:43.787 { 00:27:43.787 "dma_device_id": "system", 00:27:43.787 "dma_device_type": 1 00:27:43.787 } 00:27:43.787 ], 00:27:43.787 "driver_specific": { 00:27:43.787 "nvme": [ 00:27:43.787 { 00:27:43.787 "trid": { 00:27:43.787 
"trtype": "TCP", 00:27:43.787 "adrfam": "IPv4", 00:27:43.787 "traddr": "10.0.0.2", 00:27:43.787 "trsvcid": "4420", 00:27:43.787 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:43.787 }, 00:27:43.787 "ctrlr_data": { 00:27:43.787 "cntlid": 1, 00:27:43.787 "vendor_id": "0x8086", 00:27:43.787 "model_number": "SPDK bdev Controller", 00:27:43.787 "serial_number": "00000000000000000000", 00:27:43.787 "firmware_revision": "24.09", 00:27:43.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.788 "oacs": { 00:27:43.788 "security": 0, 00:27:43.788 "format": 0, 00:27:43.788 "firmware": 0, 00:27:43.788 "ns_manage": 0 00:27:43.788 }, 00:27:43.788 "multi_ctrlr": true, 00:27:43.788 "ana_reporting": false 00:27:43.788 }, 00:27:43.788 "vs": { 00:27:43.788 "nvme_version": "1.3" 00:27:43.788 }, 00:27:43.788 "ns_data": { 00:27:43.788 "id": 1, 00:27:43.788 "can_share": true 00:27:43.788 } 00:27:43.788 } 00:27:43.788 ], 00:27:43.788 "mp_policy": "active_passive" 00:27:43.788 } 00:27:43.788 } 00:27:43.788 ] 00:27:43.788 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.788 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:43.788 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.788 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.788 [2024-07-23 06:23:37.009469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:43.788 [2024-07-23 06:23:37.009559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203d850 (9): Bad file descriptor 00:27:44.046 [2024-07-23 06:23:37.151786] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.046 [ 00:27:44.046 { 00:27:44.046 "name": "nvme0n1", 00:27:44.046 "aliases": [ 00:27:44.046 "99531e23-3659-4fa6-968a-d8971625158f" 00:27:44.046 ], 00:27:44.046 "product_name": "NVMe disk", 00:27:44.046 "block_size": 512, 00:27:44.046 "num_blocks": 2097152, 00:27:44.046 "uuid": "99531e23-3659-4fa6-968a-d8971625158f", 00:27:44.046 "assigned_rate_limits": { 00:27:44.046 "rw_ios_per_sec": 0, 00:27:44.046 "rw_mbytes_per_sec": 0, 00:27:44.046 "r_mbytes_per_sec": 0, 00:27:44.046 "w_mbytes_per_sec": 0 00:27:44.046 }, 00:27:44.046 "claimed": false, 00:27:44.046 "zoned": false, 00:27:44.046 "supported_io_types": { 00:27:44.046 "read": true, 00:27:44.046 "write": true, 00:27:44.046 "unmap": false, 00:27:44.046 "flush": true, 00:27:44.046 "reset": true, 00:27:44.046 "nvme_admin": true, 00:27:44.046 "nvme_io": true, 00:27:44.046 "nvme_io_md": false, 00:27:44.046 "write_zeroes": true, 00:27:44.046 "zcopy": false, 00:27:44.046 "get_zone_info": false, 00:27:44.046 "zone_management": false, 00:27:44.046 "zone_append": false, 00:27:44.046 "compare": true, 00:27:44.046 "compare_and_write": true, 00:27:44.046 "abort": true, 00:27:44.046 "seek_hole": false, 00:27:44.046 "seek_data": false, 00:27:44.046 "copy": true, 00:27:44.046 "nvme_iov_md": false 00:27:44.046 }, 00:27:44.046 "memory_domains": [ 00:27:44.046 { 00:27:44.046 "dma_device_id": "system", 00:27:44.046 "dma_device_type": 1 00:27:44.046 } 00:27:44.046 ], 00:27:44.046 "driver_specific": { 00:27:44.046 "nvme": [ 00:27:44.046 { 00:27:44.046 "trid": { 00:27:44.046 "trtype": "TCP", 00:27:44.046 "adrfam": "IPv4", 00:27:44.046 "traddr": "10.0.0.2", 00:27:44.046 "trsvcid": "4420", 00:27:44.046 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:44.046 }, 00:27:44.046 "ctrlr_data": { 00:27:44.046 "cntlid": 2, 00:27:44.046 "vendor_id": "0x8086", 00:27:44.046 "model_number": "SPDK bdev Controller", 00:27:44.046 "serial_number": "00000000000000000000", 00:27:44.046 "firmware_revision": "24.09", 00:27:44.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.046 "oacs": { 00:27:44.046 "security": 0, 00:27:44.046 "format": 0, 00:27:44.046 "firmware": 0, 00:27:44.046 "ns_manage": 0 00:27:44.046 }, 00:27:44.046 "multi_ctrlr": true, 00:27:44.046 "ana_reporting": false 00:27:44.046 }, 00:27:44.046 "vs": { 00:27:44.046 "nvme_version": "1.3" 00:27:44.046 }, 00:27:44.046 "ns_data": { 00:27:44.046 "id": 1, 00:27:44.046 "can_share": true 00:27:44.046 } 00:27:44.046 } 00:27:44.046 ], 00:27:44.046 "mp_policy": "active_passive" 00:27:44.046 } 00:27:44.046 } 00:27:44.046 ] 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.046 06:23:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FUi3Rwjy6N 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FUi3Rwjy6N 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.046 [2024-07-23 06:23:37.202155] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:44.046 [2024-07-23 06:23:37.202281] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FUi3Rwjy6N 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.046 [2024-07-23 06:23:37.210180] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FUi3Rwjy6N 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.046 [2024-07-23 06:23:37.218207] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:44.046 [2024-07-23 06:23:37.218268] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:44.046 nvme0n1 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:44.046 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.046 [ 00:27:44.046 { 00:27:44.046 "name": "nvme0n1", 00:27:44.046 "aliases": [ 00:27:44.046 "99531e23-3659-4fa6-968a-d8971625158f" 00:27:44.046 ], 00:27:44.046 "product_name": "NVMe disk", 00:27:44.046 "block_size": 512, 00:27:44.046 "num_blocks": 2097152, 00:27:44.046 "uuid": "99531e23-3659-4fa6-968a-d8971625158f", 00:27:44.046 "assigned_rate_limits": { 00:27:44.046 "rw_ios_per_sec": 0, 00:27:44.046 "rw_mbytes_per_sec": 0, 00:27:44.046 "r_mbytes_per_sec": 0, 00:27:44.046 "w_mbytes_per_sec": 0 00:27:44.046 }, 00:27:44.046 "claimed": false, 00:27:44.046 "zoned": false, 00:27:44.046 "supported_io_types": { 00:27:44.046 "read": true, 00:27:44.046 "write": true, 00:27:44.046 "unmap": false, 00:27:44.046 "flush": true, 00:27:44.046 "reset": true, 00:27:44.046 "nvme_admin": true, 00:27:44.046 "nvme_io": true, 00:27:44.046 "nvme_io_md": false, 00:27:44.046 "write_zeroes": true, 00:27:44.046 "zcopy": false, 00:27:44.046 "get_zone_info": false, 00:27:44.046 "zone_management": false, 00:27:44.046 "zone_append": false, 00:27:44.046 "compare": true, 00:27:44.046 "compare_and_write": true, 00:27:44.046 "abort": true, 00:27:44.046 "seek_hole": false, 00:27:44.046 "seek_data": false, 00:27:44.046 "copy": true, 00:27:44.046 "nvme_iov_md": false 00:27:44.046 }, 00:27:44.046 "memory_domains": [ 00:27:44.047 { 00:27:44.047 "dma_device_id": "system", 00:27:44.047 "dma_device_type": 1 00:27:44.047 } 00:27:44.047 ], 00:27:44.047 "driver_specific": { 00:27:44.047 "nvme": [ 00:27:44.047 { 00:27:44.047 "trid": { 00:27:44.047 "trtype": "TCP", 00:27:44.047 "adrfam": "IPv4", 00:27:44.047 "traddr": "10.0.0.2", 00:27:44.047 "trsvcid": "4421", 00:27:44.047 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:44.047 }, 00:27:44.047 "ctrlr_data": { 00:27:44.047 "cntlid": 3, 00:27:44.047 "vendor_id": "0x8086", 00:27:44.047 "model_number": "SPDK bdev Controller", 00:27:44.047 "serial_number": "00000000000000000000", 00:27:44.047 "firmware_revision": "24.09", 00:27:44.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.047 "oacs": { 00:27:44.047 "security": 0, 00:27:44.047 "format": 0, 00:27:44.047 "firmware": 0, 00:27:44.047 "ns_manage": 0 00:27:44.047 }, 00:27:44.047 "multi_ctrlr": true, 00:27:44.047 "ana_reporting": false 00:27:44.047 }, 00:27:44.047 "vs": { 00:27:44.047 "nvme_version": "1.3" 00:27:44.047 }, 00:27:44.047 "ns_data": { 00:27:44.047 "id": 1, 00:27:44.047 "can_share": true 00:27:44.047 } 00:27:44.047 } 00:27:44.047 ], 00:27:44.047 "mp_policy": "active_passive" 00:27:44.047 } 00:27:44.047 } 00:27:44.047 ] 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.FUi3Rwjy6N 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:44.047 06:23:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.047 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.047 rmmod nvme_tcp 00:27:44.047 rmmod nvme_fabrics 00:27:44.047 rmmod nvme_keyring 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1827883 ']' 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1827883 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1827883 ']' 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1827883 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1827883 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1827883' 00:27:44.305 killing process with pid 1827883 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1827883 00:27:44.305 [2024-07-23 06:23:37.431837] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:44.305 [2024-07-23 06:23:37.431878] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1827883 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.305 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.305 06:23:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:46.837 00:27:46.837 real 0m5.513s 00:27:46.837 user 0m2.122s 00:27:46.837 sys 0m1.787s 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.837 ************************************ 00:27:46.837 END TEST nvmf_async_init 00:27:46.837 ************************************ 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.837 ************************************ 00:27:46.837 START TEST dma 00:27:46.837 ************************************ 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:46.837 * Looking for test storage... 00:27:46.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:46.837 00:27:46.837 real 0m0.056s 00:27:46.837 user 0m0.024s 00:27:46.837 sys 0m0.037s 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:46.837 ************************************ 00:27:46.837 END TEST dma 00:27:46.837 ************************************ 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.837 ************************************ 00:27:46.837 START TEST nvmf_identify 00:27:46.837 ************************************ 00:27:46.837 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:46.837 * Looking for test storage... 
00:27:46.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:46.838 06:23:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:48.739 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:48.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:48.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.740 06:23:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:48.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:48.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:48.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:27:48.740 00:27:48.740 --- 10.0.0.2 ping statistics --- 00:27:48.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.740 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:27:48.740 00:27:48.740 --- 10.0.0.1 ping statistics --- 00:27:48.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.740 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1830010 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1830010 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1830010 ']' 00:27:48.740 06:23:41 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.740 06:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.740 [2024-07-23 06:23:41.991696] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:48.740 [2024-07-23 06:23:41.991784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.740 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.740 [2024-07-23 06:23:42.028088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:48.740 [2024-07-23 06:23:42.055007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:48.999 [2024-07-23 06:23:42.142761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.999 [2024-07-23 06:23:42.142813] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.999 [2024-07-23 06:23:42.142826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:48.999 [2024-07-23 06:23:42.142838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:48.999 [2024-07-23 06:23:42.142848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:48.999 [2024-07-23 06:23:42.142918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.999 [2024-07-23 06:23:42.142999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.999 [2024-07-23 06:23:42.143061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.999 [2024-07-23 06:23:42.143063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.999 [2024-07-23 06:23:42.278919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.999 Malloc0 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.999 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:49.260 [2024-07-23 06:23:42.356184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:49.260 [ 00:27:49.260 { 00:27:49.260 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:49.260 "subtype": "Discovery", 00:27:49.260 "listen_addresses": [ 00:27:49.260 { 00:27:49.260 "trtype": "TCP", 00:27:49.260 "adrfam": "IPv4", 00:27:49.260 "traddr": "10.0.0.2", 00:27:49.260 "trsvcid": "4420" 00:27:49.260 } 00:27:49.260 ], 00:27:49.260 "allow_any_host": true, 00:27:49.260 "hosts": [] 00:27:49.260 }, 00:27:49.260 { 00:27:49.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.260 "subtype": "NVMe", 00:27:49.260 "listen_addresses": [ 00:27:49.260 { 00:27:49.260 "trtype": "TCP", 00:27:49.260 "adrfam": "IPv4", 00:27:49.260 "traddr": "10.0.0.2", 00:27:49.260 "trsvcid": "4420" 00:27:49.260 } 00:27:49.260 ], 00:27:49.260 "allow_any_host": true, 00:27:49.260 "hosts": [], 00:27:49.260 "serial_number": "SPDK00000000000001", 00:27:49.260 "model_number": "SPDK bdev Controller", 00:27:49.260 "max_namespaces": 32, 00:27:49.260 "min_cntlid": 1, 00:27:49.260 "max_cntlid": 65519, 00:27:49.260 "namespaces": [ 00:27:49.260 { 00:27:49.260 "nsid": 1, 00:27:49.260 "bdev_name": "Malloc0", 00:27:49.260 "name": "Malloc0", 00:27:49.260 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:49.260 "eui64": "ABCDEF0123456789", 00:27:49.260 "uuid": "33b9fbce-ca25-42c0-a9ae-6fe36cba29f8" 00:27:49.260 } 00:27:49.260 ] 00:27:49.260 } 00:27:49.260 ] 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.260 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:49.260 [2024-07-23 06:23:42.397866] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:49.260 [2024-07-23 06:23:42.397910] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830147 ] 00:27:49.260 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.260 [2024-07-23 06:23:42.416361] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
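The rpc_cmd calls above map directly onto SPDK's scripts/rpc.py; outside the test harness the same transport, bdev, subsystem, namespace and listeners could be configured like this (a sketch assuming the default /var/tmp/spdk.sock RPC socket):

# Create the TCP transport with the options used in this run.
sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Back the namespace with a 64 MiB, 512-byte-block malloc bdev named Malloc0.
sudo ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Create the NVM subsystem, allow any host (-a), and set its serial number (-s).
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Attach Malloc0 with the NGUID/EUI-64 values reported by nvmf_get_subsystems above.
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
# Listen on 10.0.0.2:4420 for both the NVM subsystem and the discovery subsystem.
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sudo ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
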
00:27:49.260 [2024-07-23 06:23:42.434221] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:49.260 [2024-07-23 06:23:42.434281] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:49.260 [2024-07-23 06:23:42.434291] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:49.260 [2024-07-23 06:23:42.434305] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:49.260 [2024-07-23 06:23:42.434320] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:49.260 [2024-07-23 06:23:42.434611] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:49.260 [2024-07-23 06:23:42.434680] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f68630 0 00:27:49.260 [2024-07-23 06:23:42.440632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:49.260 [2024-07-23 06:23:42.440653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:49.260 [2024-07-23 06:23:42.440662] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:49.260 [2024-07-23 06:23:42.440668] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:49.260 [2024-07-23 06:23:42.440723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.440736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.440743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.260 [2024-07-23 06:23:42.440762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:49.260 [2024-07-23 06:23:42.440789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.260 [2024-07-23 06:23:42.447626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.260 [2024-07-23 06:23:42.447645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.260 [2024-07-23 06:23:42.447652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.447660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.260 [2024-07-23 06:23:42.447681] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:49.260 [2024-07-23 06:23:42.447692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:49.260 [2024-07-23 06:23:42.447701] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:49.260 [2024-07-23 06:23:42.447725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.447734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.447745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.260 [2024-07-23 06:23:42.447757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.260 [2024-07-23 
06:23:42.447781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.260 [2024-07-23 06:23:42.447980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.260 [2024-07-23 06:23:42.447996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.260 [2024-07-23 06:23:42.448003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.448010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.260 [2024-07-23 06:23:42.448024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:49.260 [2024-07-23 06:23:42.448039] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:49.260 [2024-07-23 06:23:42.448051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.448059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.448066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.260 [2024-07-23 06:23:42.448076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.260 [2024-07-23 06:23:42.448098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.260 [2024-07-23 06:23:42.448245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.260 [2024-07-23 06:23:42.448261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.260 [2024-07-23 06:23:42.448268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.448275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.260 [2024-07-23 06:23:42.448284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:49.260 [2024-07-23 06:23:42.448298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:49.260 [2024-07-23 06:23:42.448310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.448318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.260 [2024-07-23 06:23:42.448324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.448335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.261 [2024-07-23 06:23:42.448356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.261 [2024-07-23 06:23:42.448501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.261 [2024-07-23 06:23:42.448516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.261 [2024-07-23 06:23:42.448523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.448530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.261 [2024-07-23 06:23:42.448539] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:49.261 [2024-07-23 06:23:42.448556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.448565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.448571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.448582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.261 [2024-07-23 06:23:42.448608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.261 [2024-07-23 06:23:42.448764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.261 [2024-07-23 06:23:42.448780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.261 [2024-07-23 06:23:42.448787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.448793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.261 [2024-07-23 06:23:42.448802] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:49.261 [2024-07-23 06:23:42.448810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:49.261 [2024-07-23 06:23:42.448823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:49.261 [2024-07-23 06:23:42.448934] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:49.261 [2024-07-23 06:23:42.448942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:49.261 [2024-07-23 06:23:42.448955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.448963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.448969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.448980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.261 [2024-07-23 06:23:42.449002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.261 [2024-07-23 06:23:42.449199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.261 [2024-07-23 06:23:42.449215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.261 [2024-07-23 06:23:42.449222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.261 [2024-07-23 06:23:42.449237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:49.261 [2024-07-23 06:23:42.449253] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.449279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.261 [2024-07-23 06:23:42.449300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.261 [2024-07-23 06:23:42.449495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.261 [2024-07-23 06:23:42.449511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.261 [2024-07-23 06:23:42.449518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.261 [2024-07-23 06:23:42.449533] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:49.261 [2024-07-23 06:23:42.449541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:49.261 [2024-07-23 06:23:42.449554] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:49.261 [2024-07-23 06:23:42.449573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:49.261 [2024-07-23 06:23:42.449589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.449607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.261 [2024-07-23 06:23:42.449637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.261 [2024-07-23 06:23:42.449823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.261 [2024-07-23 06:23:42.449839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.261 [2024-07-23 06:23:42.449846] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449853] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f68630): datao=0, datal=4096, cccid=0 00:27:49.261 [2024-07-23 06:23:42.449861] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb6f80) on tqpair(0x1f68630): expected_datao=0, payload_size=4096 00:27:49.261 [2024-07-23 06:23:42.449869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449898] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.449914] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.261 [2024-07-23 06:23:42.450062] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.261 [2024-07-23 06:23:42.450069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.261 [2024-07-23 06:23:42.450092] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:49.261 [2024-07-23 06:23:42.450102] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:49.261 [2024-07-23 06:23:42.450110] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:49.261 [2024-07-23 06:23:42.450119] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:49.261 [2024-07-23 06:23:42.450127] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:49.261 [2024-07-23 06:23:42.450135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:49.261 [2024-07-23 06:23:42.450148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:49.261 [2024-07-23 06:23:42.450161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.450185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:49.261 [2024-07-23 06:23:42.450206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.261 [2024-07-23 06:23:42.450362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.261 [2024-07-23 06:23:42.450378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.261 [2024-07-23 06:23:42.450385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.261 [2024-07-23 06:23:42.450407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.450432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.261 [2024-07-23 06:23:42.450442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.450464] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.261 [2024-07-23 06:23:42.450474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.450495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.261 [2024-07-23 06:23:42.450505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.261 [2024-07-23 06:23:42.450517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f68630) 00:27:49.261 [2024-07-23 06:23:42.450526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.261 [2024-07-23 06:23:42.450535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:49.261 [2024-07-23 06:23:42.450554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:49.261 [2024-07-23 06:23:42.450567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.450574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f68630) 00:27:49.262 [2024-07-23 06:23:42.450584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.262 [2024-07-23 06:23:42.450607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6f80, cid 0, qid 0 00:27:49.262 [2024-07-23 06:23:42.450631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7100, cid 1, qid 0 00:27:49.262 [2024-07-23 06:23:42.450641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7280, cid 2, qid 0 00:27:49.262 [2024-07-23 06:23:42.450649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7400, cid 3, qid 0 00:27:49.262 [2024-07-23 06:23:42.450656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7580, cid 4, qid 0 00:27:49.262 [2024-07-23 06:23:42.450844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.262 [2024-07-23 06:23:42.450859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.262 [2024-07-23 06:23:42.450866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.450873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7580) on tqpair=0x1f68630 00:27:49.262 [2024-07-23 06:23:42.450882] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:49.262 [2024-07-23 06:23:42.450891] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:49.262 [2024-07-23 06:23:42.450908] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.450922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f68630) 00:27:49.262 [2024-07-23 06:23:42.450933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.262 [2024-07-23 06:23:42.450954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7580, cid 4, qid 0 00:27:49.262 [2024-07-23 06:23:42.451123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.262 [2024-07-23 06:23:42.451143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.262 [2024-07-23 06:23:42.451151] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451158] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f68630): datao=0, datal=4096, cccid=4 00:27:49.262 [2024-07-23 06:23:42.451165] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb7580) on tqpair(0x1f68630): expected_datao=0, payload_size=4096 00:27:49.262 [2024-07-23 06:23:42.451173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451183] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.262 [2024-07-23 06:23:42.451270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.262 [2024-07-23 06:23:42.451276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7580) on tqpair=0x1f68630 00:27:49.262 [2024-07-23 06:23:42.451301] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:49.262 [2024-07-23 06:23:42.451338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f68630) 00:27:49.262 [2024-07-23 06:23:42.451360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.262 [2024-07-23 06:23:42.451371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.451384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f68630) 00:27:49.262 [2024-07-23 06:23:42.451393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.262 [2024-07-23 06:23:42.451420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7580, cid 4, qid 0 00:27:49.262 [2024-07-23 06:23:42.451437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7700, cid 5, qid 0 00:27:49.262 [2024-07-23 06:23:42.455627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.262 [2024-07-23 06:23:42.455653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:27:49.262 [2024-07-23 06:23:42.455661] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.455668] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f68630): datao=0, datal=1024, cccid=4 00:27:49.262 [2024-07-23 06:23:42.455675] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb7580) on tqpair(0x1f68630): expected_datao=0, payload_size=1024 00:27:49.262 [2024-07-23 06:23:42.455683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.455693] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.455700] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.455709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.262 [2024-07-23 06:23:42.455718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.262 [2024-07-23 06:23:42.455730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.455738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7700) on tqpair=0x1f68630 00:27:49.262 [2024-07-23 06:23:42.492753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.262 [2024-07-23 06:23:42.492775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.262 [2024-07-23 06:23:42.492784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.492791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7580) on tqpair=0x1f68630 00:27:49.262 [2024-07-23 06:23:42.492809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.492819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f68630) 00:27:49.262 [2024-07-23 06:23:42.492830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.262 [2024-07-23 06:23:42.492861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7580, cid 4, qid 0 00:27:49.262 [2024-07-23 06:23:42.493027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.262 [2024-07-23 06:23:42.493048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.262 [2024-07-23 06:23:42.493056] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493063] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f68630): datao=0, datal=3072, cccid=4 00:27:49.262 [2024-07-23 06:23:42.493070] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb7580) on tqpair(0x1f68630): expected_datao=0, payload_size=3072 00:27:49.262 [2024-07-23 06:23:42.493078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493088] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493096] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.262 [2024-07-23 06:23:42.493142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.262 [2024-07-23 06:23:42.493149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:27:49.262 [2024-07-23 06:23:42.493156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7580) on tqpair=0x1f68630 00:27:49.262 [2024-07-23 06:23:42.493171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f68630) 00:27:49.262 [2024-07-23 06:23:42.493190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.262 [2024-07-23 06:23:42.493219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7580, cid 4, qid 0 00:27:49.262 [2024-07-23 06:23:42.493378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.262 [2024-07-23 06:23:42.493393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.262 [2024-07-23 06:23:42.493399] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493406] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f68630): datao=0, datal=8, cccid=4 00:27:49.262 [2024-07-23 06:23:42.493413] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb7580) on tqpair(0x1f68630): expected_datao=0, payload_size=8 00:27:49.262 [2024-07-23 06:23:42.493421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493430] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.493438] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.533752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.262 [2024-07-23 06:23:42.533775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.262 [2024-07-23 06:23:42.533784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.262 [2024-07-23 06:23:42.533796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7580) on tqpair=0x1f68630 00:27:49.262 ===================================================== 00:27:49.262 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:49.262 ===================================================== 00:27:49.262 Controller Capabilities/Features 00:27:49.262 ================================ 00:27:49.262 Vendor ID: 0000 00:27:49.262 Subsystem Vendor ID: 0000 00:27:49.262 Serial Number: .................... 00:27:49.262 Model Number: ........................................ 
00:27:49.262 Firmware Version: 24.09 00:27:49.262 Recommended Arb Burst: 0 00:27:49.262 IEEE OUI Identifier: 00 00 00 00:27:49.262 Multi-path I/O 00:27:49.262 May have multiple subsystem ports: No 00:27:49.262 May have multiple controllers: No 00:27:49.262 Associated with SR-IOV VF: No 00:27:49.262 Max Data Transfer Size: 131072 00:27:49.262 Max Number of Namespaces: 0 00:27:49.262 Max Number of I/O Queues: 1024 00:27:49.263 NVMe Specification Version (VS): 1.3 00:27:49.263 NVMe Specification Version (Identify): 1.3 00:27:49.263 Maximum Queue Entries: 128 00:27:49.263 Contiguous Queues Required: Yes 00:27:49.263 Arbitration Mechanisms Supported 00:27:49.263 Weighted Round Robin: Not Supported 00:27:49.263 Vendor Specific: Not Supported 00:27:49.263 Reset Timeout: 15000 ms 00:27:49.263 Doorbell Stride: 4 bytes 00:27:49.263 NVM Subsystem Reset: Not Supported 00:27:49.263 Command Sets Supported 00:27:49.263 NVM Command Set: Supported 00:27:49.263 Boot Partition: Not Supported 00:27:49.263 Memory Page Size Minimum: 4096 bytes 00:27:49.263 Memory Page Size Maximum: 4096 bytes 00:27:49.263 Persistent Memory Region: Not Supported 00:27:49.263 Optional Asynchronous Events Supported 00:27:49.263 Namespace Attribute Notices: Not Supported 00:27:49.263 Firmware Activation Notices: Not Supported 00:27:49.263 ANA Change Notices: Not Supported 00:27:49.263 PLE Aggregate Log Change Notices: Not Supported 00:27:49.263 LBA Status Info Alert Notices: Not Supported 00:27:49.263 EGE Aggregate Log Change Notices: Not Supported 00:27:49.263 Normal NVM Subsystem Shutdown event: Not Supported 00:27:49.263 Zone Descriptor Change Notices: Not Supported 00:27:49.263 Discovery Log Change Notices: Supported 00:27:49.263 Controller Attributes 00:27:49.263 128-bit Host Identifier: Not Supported 00:27:49.263 Non-Operational Permissive Mode: Not Supported 00:27:49.263 NVM Sets: Not Supported 00:27:49.263 Read Recovery Levels: Not Supported 00:27:49.263 Endurance Groups: Not Supported 00:27:49.263 Predictable Latency Mode: Not Supported 00:27:49.263 Traffic Based Keep ALive: Not Supported 00:27:49.263 Namespace Granularity: Not Supported 00:27:49.263 SQ Associations: Not Supported 00:27:49.263 UUID List: Not Supported 00:27:49.263 Multi-Domain Subsystem: Not Supported 00:27:49.263 Fixed Capacity Management: Not Supported 00:27:49.263 Variable Capacity Management: Not Supported 00:27:49.263 Delete Endurance Group: Not Supported 00:27:49.263 Delete NVM Set: Not Supported 00:27:49.263 Extended LBA Formats Supported: Not Supported 00:27:49.263 Flexible Data Placement Supported: Not Supported 00:27:49.263 00:27:49.263 Controller Memory Buffer Support 00:27:49.263 ================================ 00:27:49.263 Supported: No 00:27:49.263 00:27:49.263 Persistent Memory Region Support 00:27:49.263 ================================ 00:27:49.263 Supported: No 00:27:49.263 00:27:49.263 Admin Command Set Attributes 00:27:49.263 ============================ 00:27:49.263 Security Send/Receive: Not Supported 00:27:49.263 Format NVM: Not Supported 00:27:49.263 Firmware Activate/Download: Not Supported 00:27:49.263 Namespace Management: Not Supported 00:27:49.263 Device Self-Test: Not Supported 00:27:49.263 Directives: Not Supported 00:27:49.263 NVMe-MI: Not Supported 00:27:49.263 Virtualization Management: Not Supported 00:27:49.263 Doorbell Buffer Config: Not Supported 00:27:49.263 Get LBA Status Capability: Not Supported 00:27:49.263 Command & Feature Lockdown Capability: Not Supported 00:27:49.263 Abort Command Limit: 1 00:27:49.263 Async 
Event Request Limit: 4 00:27:49.263 Number of Firmware Slots: N/A 00:27:49.263 Firmware Slot 1 Read-Only: N/A 00:27:49.263 Firmware Activation Without Reset: N/A 00:27:49.263 Multiple Update Detection Support: N/A 00:27:49.263 Firmware Update Granularity: No Information Provided 00:27:49.263 Per-Namespace SMART Log: No 00:27:49.263 Asymmetric Namespace Access Log Page: Not Supported 00:27:49.263 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:49.263 Command Effects Log Page: Not Supported 00:27:49.263 Get Log Page Extended Data: Supported 00:27:49.263 Telemetry Log Pages: Not Supported 00:27:49.263 Persistent Event Log Pages: Not Supported 00:27:49.263 Supported Log Pages Log Page: May Support 00:27:49.263 Commands Supported & Effects Log Page: Not Supported 00:27:49.263 Feature Identifiers & Effects Log Page:May Support 00:27:49.263 NVMe-MI Commands & Effects Log Page: May Support 00:27:49.263 Data Area 4 for Telemetry Log: Not Supported 00:27:49.263 Error Log Page Entries Supported: 128 00:27:49.263 Keep Alive: Not Supported 00:27:49.263 00:27:49.263 NVM Command Set Attributes 00:27:49.263 ========================== 00:27:49.263 Submission Queue Entry Size 00:27:49.263 Max: 1 00:27:49.263 Min: 1 00:27:49.263 Completion Queue Entry Size 00:27:49.263 Max: 1 00:27:49.263 Min: 1 00:27:49.263 Number of Namespaces: 0 00:27:49.263 Compare Command: Not Supported 00:27:49.263 Write Uncorrectable Command: Not Supported 00:27:49.263 Dataset Management Command: Not Supported 00:27:49.263 Write Zeroes Command: Not Supported 00:27:49.263 Set Features Save Field: Not Supported 00:27:49.263 Reservations: Not Supported 00:27:49.263 Timestamp: Not Supported 00:27:49.263 Copy: Not Supported 00:27:49.263 Volatile Write Cache: Not Present 00:27:49.263 Atomic Write Unit (Normal): 1 00:27:49.263 Atomic Write Unit (PFail): 1 00:27:49.263 Atomic Compare & Write Unit: 1 00:27:49.263 Fused Compare & Write: Supported 00:27:49.263 Scatter-Gather List 00:27:49.263 SGL Command Set: Supported 00:27:49.263 SGL Keyed: Supported 00:27:49.263 SGL Bit Bucket Descriptor: Not Supported 00:27:49.263 SGL Metadata Pointer: Not Supported 00:27:49.263 Oversized SGL: Not Supported 00:27:49.263 SGL Metadata Address: Not Supported 00:27:49.263 SGL Offset: Supported 00:27:49.263 Transport SGL Data Block: Not Supported 00:27:49.263 Replay Protected Memory Block: Not Supported 00:27:49.263 00:27:49.263 Firmware Slot Information 00:27:49.263 ========================= 00:27:49.263 Active slot: 0 00:27:49.263 00:27:49.263 00:27:49.263 Error Log 00:27:49.263 ========= 00:27:49.263 00:27:49.263 Active Namespaces 00:27:49.263 ================= 00:27:49.263 Discovery Log Page 00:27:49.263 ================== 00:27:49.263 Generation Counter: 2 00:27:49.263 Number of Records: 2 00:27:49.263 Record Format: 0 00:27:49.263 00:27:49.263 Discovery Log Entry 0 00:27:49.263 ---------------------- 00:27:49.263 Transport Type: 3 (TCP) 00:27:49.263 Address Family: 1 (IPv4) 00:27:49.263 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:49.263 Entry Flags: 00:27:49.263 Duplicate Returned Information: 1 00:27:49.263 Explicit Persistent Connection Support for Discovery: 1 00:27:49.263 Transport Requirements: 00:27:49.263 Secure Channel: Not Required 00:27:49.263 Port ID: 0 (0x0000) 00:27:49.263 Controller ID: 65535 (0xffff) 00:27:49.263 Admin Max SQ Size: 128 00:27:49.263 Transport Service Identifier: 4420 00:27:49.263 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:49.263 Transport Address: 10.0.0.2 00:27:49.263 
Discovery Log Entry 1 00:27:49.263 ---------------------- 00:27:49.263 Transport Type: 3 (TCP) 00:27:49.263 Address Family: 1 (IPv4) 00:27:49.263 Subsystem Type: 2 (NVM Subsystem) 00:27:49.263 Entry Flags: 00:27:49.263 Duplicate Returned Information: 0 00:27:49.263 Explicit Persistent Connection Support for Discovery: 0 00:27:49.263 Transport Requirements: 00:27:49.263 Secure Channel: Not Required 00:27:49.263 Port ID: 0 (0x0000) 00:27:49.263 Controller ID: 65535 (0xffff) 00:27:49.263 Admin Max SQ Size: 128 00:27:49.263 Transport Service Identifier: 4420 00:27:49.263 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:49.263 Transport Address: 10.0.0.2 [2024-07-23 06:23:42.533922] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:49.263 [2024-07-23 06:23:42.533944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6f80) on tqpair=0x1f68630 00:27:49.263 [2024-07-23 06:23:42.533956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.263 [2024-07-23 06:23:42.533965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7100) on tqpair=0x1f68630 00:27:49.263 [2024-07-23 06:23:42.533973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.263 [2024-07-23 06:23:42.533981] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7280) on tqpair=0x1f68630 00:27:49.263 [2024-07-23 06:23:42.533988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.263 [2024-07-23 06:23:42.533996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7400) on tqpair=0x1f68630 00:27:49.263 [2024-07-23 06:23:42.534004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.263 [2024-07-23 06:23:42.534037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.263 [2024-07-23 06:23:42.534046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.263 [2024-07-23 06:23:42.534053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f68630) 00:27:49.264 [2024-07-23 06:23:42.534064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.264 [2024-07-23 06:23:42.534088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7400, cid 3, qid 0 00:27:49.264 [2024-07-23 06:23:42.534243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.264 [2024-07-23 06:23:42.534257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.264 [2024-07-23 06:23:42.534264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.534271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7400) on tqpair=0x1f68630 00:27:49.264 [2024-07-23 06:23:42.534283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.534291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.534297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f68630) 00:27:49.264 [2024-07-23 
06:23:42.534308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.264 [2024-07-23 06:23:42.534334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7400, cid 3, qid 0 00:27:49.264 [2024-07-23 06:23:42.534498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.264 [2024-07-23 06:23:42.534514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.264 [2024-07-23 06:23:42.534521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.534528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7400) on tqpair=0x1f68630 00:27:49.264 [2024-07-23 06:23:42.534536] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:49.264 [2024-07-23 06:23:42.534544] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:49.264 [2024-07-23 06:23:42.534560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.534569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.534576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f68630) 00:27:49.264 [2024-07-23 06:23:42.534586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.264 [2024-07-23 06:23:42.538619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7400, cid 3, qid 0 00:27:49.264 [2024-07-23 06:23:42.538641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.264 [2024-07-23 06:23:42.538652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.264 [2024-07-23 06:23:42.538659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.538665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7400) on tqpair=0x1f68630 00:27:49.264 [2024-07-23 06:23:42.538683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.538708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.538715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f68630) 00:27:49.264 [2024-07-23 06:23:42.538726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.264 [2024-07-23 06:23:42.538749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb7400, cid 3, qid 0 00:27:49.264 [2024-07-23 06:23:42.538915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.264 [2024-07-23 06:23:42.538929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.264 [2024-07-23 06:23:42.538936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.264 [2024-07-23 06:23:42.538942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb7400) on tqpair=0x1f68630 00:27:49.264 [2024-07-23 06:23:42.538955] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:27:49.264 00:27:49.264 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:49.264 [2024-07-23 06:23:42.570311] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:49.264 [2024-07-23 06:23:42.570355] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830150 ] 00:27:49.264 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.264 [2024-07-23 06:23:42.588028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:49.526 [2024-07-23 06:23:42.605724] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:49.526 [2024-07-23 06:23:42.605773] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:49.526 [2024-07-23 06:23:42.605783] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:49.526 [2024-07-23 06:23:42.605799] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:49.526 [2024-07-23 06:23:42.605812] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:49.526 [2024-07-23 06:23:42.606038] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:49.526 [2024-07-23 06:23:42.606082] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x122e630 0 00:27:49.526 [2024-07-23 06:23:42.616627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:49.526 [2024-07-23 06:23:42.616647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:49.526 [2024-07-23 06:23:42.616664] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:49.526 [2024-07-23 06:23:42.616671] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:49.526 [2024-07-23 06:23:42.616717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.616731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.616739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.526 [2024-07-23 06:23:42.616753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:49.526 [2024-07-23 06:23:42.616780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.526 [2024-07-23 06:23:42.624632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.526 [2024-07-23 06:23:42.624650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.526 [2024-07-23 06:23:42.624658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.624680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.526 [2024-07-23 06:23:42.624700] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:49.526 [2024-07-23 
06:23:42.624713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:49.526 [2024-07-23 06:23:42.624722] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:49.526 [2024-07-23 06:23:42.624742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.624754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.624762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.526 [2024-07-23 06:23:42.624773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.526 [2024-07-23 06:23:42.624798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.526 [2024-07-23 06:23:42.625016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.526 [2024-07-23 06:23:42.625033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.526 [2024-07-23 06:23:42.625040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.526 [2024-07-23 06:23:42.625060] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:49.526 [2024-07-23 06:23:42.625076] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:49.526 [2024-07-23 06:23:42.625091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.526 [2024-07-23 06:23:42.625134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.526 [2024-07-23 06:23:42.625157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.526 [2024-07-23 06:23:42.625403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.526 [2024-07-23 06:23:42.625420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.526 [2024-07-23 06:23:42.625427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.526 [2024-07-23 06:23:42.625443] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:49.526 [2024-07-23 06:23:42.625459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:49.526 [2024-07-23 06:23:42.625478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 
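Outside this harness, a Linux initiator with nvme-cli could exercise the same listener; the commands below are a hypothetical host-side equivalent of the discovery and identify steps and are not part of this run:

# Discover the subsystems advertised on 10.0.0.2:4420 (should match the two discovery log entries above).
sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
# Connect to the NVM subsystem, list the attached namespace, then disconnect.
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
sudo nvme list
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
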
00:27:49.526 [2024-07-23 06:23:42.625504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.526 [2024-07-23 06:23:42.625541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.526 [2024-07-23 06:23:42.625772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.526 [2024-07-23 06:23:42.625789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.526 [2024-07-23 06:23:42.625797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.526 [2024-07-23 06:23:42.625812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:49.526 [2024-07-23 06:23:42.625832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.625849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.526 [2024-07-23 06:23:42.625860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.526 [2024-07-23 06:23:42.625882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.526 [2024-07-23 06:23:42.626035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.526 [2024-07-23 06:23:42.626052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.526 [2024-07-23 06:23:42.626059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.526 [2024-07-23 06:23:42.626066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.526 [2024-07-23 06:23:42.626073] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:49.526 [2024-07-23 06:23:42.626082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:49.526 [2024-07-23 06:23:42.626096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:49.527 [2024-07-23 06:23:42.626209] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:49.527 [2024-07-23 06:23:42.626217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:49.527 [2024-07-23 06:23:42.626229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.626251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.626257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.626267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.527 [2024-07-23 06:23:42.626289] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.527 [2024-07-23 06:23:42.626534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.527 [2024-07-23 06:23:42.626551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.527 [2024-07-23 06:23:42.626558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.626565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.527 [2024-07-23 06:23:42.626573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:49.527 [2024-07-23 06:23:42.626596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.626608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.626623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.626634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.527 [2024-07-23 06:23:42.626657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.527 [2024-07-23 06:23:42.626835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.527 [2024-07-23 06:23:42.626852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.527 [2024-07-23 06:23:42.626859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.626866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.527 [2024-07-23 06:23:42.626874] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:49.527 [2024-07-23 06:23:42.626882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.626897] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:49.527 [2024-07-23 06:23:42.626914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.626928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.626936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.626961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.527 [2024-07-23 06:23:42.626983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.527 [2024-07-23 06:23:42.627252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.527 [2024-07-23 06:23:42.627273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.527 [2024-07-23 06:23:42.627285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.627296] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=4096, cccid=0 00:27:49.527 [2024-07-23 06:23:42.627308] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127cf80) on tqpair(0x122e630): expected_datao=0, payload_size=4096 00:27:49.527 [2024-07-23 06:23:42.627319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.627344] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.627357] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.667803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.527 [2024-07-23 06:23:42.667823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.527 [2024-07-23 06:23:42.667835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.667843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.527 [2024-07-23 06:23:42.667859] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:49.527 [2024-07-23 06:23:42.667869] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:49.527 [2024-07-23 06:23:42.667876] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:49.527 [2024-07-23 06:23:42.667883] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:49.527 [2024-07-23 06:23:42.667891] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:49.527 [2024-07-23 06:23:42.667902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.667919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.667934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.667942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.667949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.667961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:49.527 [2024-07-23 06:23:42.667986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.527 [2024-07-23 06:23:42.668155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.527 [2024-07-23 06:23:42.668172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.527 [2024-07-23 06:23:42.668179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.527 [2024-07-23 06:23:42.668197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.668222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.527 [2024-07-23 06:23:42.668232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.668255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.527 [2024-07-23 06:23:42.668279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.668301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.527 [2024-07-23 06:23:42.668310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.668330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.527 [2024-07-23 06:23:42.668339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.668359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.668373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.668380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.668391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.527 [2024-07-23 06:23:42.668416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127cf80, cid 0, qid 0 00:27:49.527 [2024-07-23 06:23:42.668443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d100, cid 1, qid 0 00:27:49.527 [2024-07-23 06:23:42.668451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d280, cid 2, qid 0 00:27:49.527 [2024-07-23 06:23:42.668458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.527 [2024-07-23 06:23:42.668465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d580, cid 4, qid 0 00:27:49.527 [2024-07-23 06:23:42.672643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.527 [2024-07-23 06:23:42.672660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.527 [2024-07-23 06:23:42.672667] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.672674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d580) on tqpair=0x122e630 00:27:49.527 [2024-07-23 06:23:42.672682] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:49.527 [2024-07-23 06:23:42.672691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.672706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.672720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:49.527 [2024-07-23 06:23:42.672731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.672739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.527 [2024-07-23 06:23:42.672745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122e630) 00:27:49.527 [2024-07-23 06:23:42.672756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:49.527 [2024-07-23 06:23:42.672778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d580, cid 4, qid 0 00:27:49.527 [2024-07-23 06:23:42.672967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.527 [2024-07-23 06:23:42.672983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.527 [2024-07-23 06:23:42.672991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d580) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.673071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.673106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.673122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122e630) 00:27:49.528 [2024-07-23 06:23:42.673140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.528 [2024-07-23 06:23:42.673177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d580, cid 4, qid 0 00:27:49.528 [2024-07-23 06:23:42.673429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.528 [2024-07-23 06:23:42.673445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.528 [2024-07-23 06:23:42.673453] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673459] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=4096, cccid=4 00:27:49.528 [2024-07-23 06:23:42.673471] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127d580) on tqpair(0x122e630): expected_datao=0, payload_size=4096 00:27:49.528 [2024-07-23 06:23:42.673488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673506] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673518] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.528 [2024-07-23 06:23:42.673578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.528 [2024-07-23 06:23:42.673585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d580) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.673607] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:49.528 [2024-07-23 06:23:42.673635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.673655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.673672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122e630) 00:27:49.528 [2024-07-23 06:23:42.673690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.528 [2024-07-23 06:23:42.673713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d580, cid 4, qid 0 00:27:49.528 [2024-07-23 06:23:42.673928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.528 [2024-07-23 06:23:42.673945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.528 [2024-07-23 06:23:42.673952] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.673962] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=4096, cccid=4 00:27:49.528 [2024-07-23 06:23:42.673975] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127d580) on tqpair(0x122e630): expected_datao=0, payload_size=4096 00:27:49.528 [2024-07-23 06:23:42.673986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674001] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674014] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.528 [2024-07-23 06:23:42.674072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.528 [2024-07-23 06:23:42.674079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d580) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.674107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674127] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122e630) 00:27:49.528 [2024-07-23 06:23:42.674162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.528 [2024-07-23 06:23:42.674184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d580, cid 4, qid 0 00:27:49.528 [2024-07-23 06:23:42.674395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.528 [2024-07-23 06:23:42.674411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.528 [2024-07-23 06:23:42.674419] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674430] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=4096, cccid=4 00:27:49.528 [2024-07-23 06:23:42.674444] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127d580) on tqpair(0x122e630): expected_datao=0, payload_size=4096 00:27:49.528 [2024-07-23 06:23:42.674456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674471] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674483] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.528 [2024-07-23 06:23:42.674543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.528 [2024-07-23 06:23:42.674550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d580) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.674570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674654] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:49.528 [2024-07-23 
06:23:42.674662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:49.528 [2024-07-23 06:23:42.674671] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:49.528 [2024-07-23 06:23:42.674705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122e630) 00:27:49.528 [2024-07-23 06:23:42.674725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.528 [2024-07-23 06:23:42.674736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.674749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122e630) 00:27:49.528 [2024-07-23 06:23:42.674758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.528 [2024-07-23 06:23:42.674784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d580, cid 4, qid 0 00:27:49.528 [2024-07-23 06:23:42.674810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d700, cid 5, qid 0 00:27:49.528 [2024-07-23 06:23:42.675018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.528 [2024-07-23 06:23:42.675035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.528 [2024-07-23 06:23:42.675042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.675049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d580) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.675061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.528 [2024-07-23 06:23:42.675074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.528 [2024-07-23 06:23:42.675082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.675089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d700) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.675122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.675133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122e630) 00:27:49.528 [2024-07-23 06:23:42.675144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.528 [2024-07-23 06:23:42.675180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d700, cid 5, qid 0 00:27:49.528 [2024-07-23 06:23:42.675417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.528 [2024-07-23 06:23:42.675433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.528 [2024-07-23 06:23:42.675440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.675447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d700) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.675466] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.675477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122e630) 00:27:49.528 [2024-07-23 06:23:42.675488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.528 [2024-07-23 06:23:42.675509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d700, cid 5, qid 0 00:27:49.528 [2024-07-23 06:23:42.675667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.528 [2024-07-23 06:23:42.675684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.528 [2024-07-23 06:23:42.675691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.675698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d700) on tqpair=0x122e630 00:27:49.528 [2024-07-23 06:23:42.675716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.528 [2024-07-23 06:23:42.675727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122e630) 00:27:49.529 [2024-07-23 06:23:42.675738] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.529 [2024-07-23 06:23:42.675761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d700, cid 5, qid 0 00:27:49.529 [2024-07-23 06:23:42.675906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.529 [2024-07-23 06:23:42.675922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.529 [2024-07-23 06:23:42.675929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.675936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d700) on tqpair=0x122e630 00:27:49.529 [2024-07-23 06:23:42.675963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.675976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122e630) 00:27:49.529 [2024-07-23 06:23:42.675987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.529 [2024-07-23 06:23:42.675999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.676006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122e630) 00:27:49.529 [2024-07-23 06:23:42.676016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.529 [2024-07-23 06:23:42.676027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.676034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x122e630) 00:27:49.529 [2024-07-23 06:23:42.676061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.529 [2024-07-23 06:23:42.676074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.676081] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x122e630) 00:27:49.529 [2024-07-23 06:23:42.676090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.529 [2024-07-23 06:23:42.676111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d700, cid 5, qid 0 00:27:49.529 [2024-07-23 06:23:42.676136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d580, cid 4, qid 0 00:27:49.529 [2024-07-23 06:23:42.676144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d880, cid 6, qid 0 00:27:49.529 [2024-07-23 06:23:42.676152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127da00, cid 7, qid 0 00:27:49.529 [2024-07-23 06:23:42.676508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.529 [2024-07-23 06:23:42.676528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.529 [2024-07-23 06:23:42.676540] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.676549] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=8192, cccid=5 00:27:49.529 [2024-07-23 06:23:42.676575] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127d700) on tqpair(0x122e630): expected_datao=0, payload_size=8192 00:27:49.529 [2024-07-23 06:23:42.676586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680641] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680655] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.529 [2024-07-23 06:23:42.680679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.529 [2024-07-23 06:23:42.680686] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680692] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=512, cccid=4 00:27:49.529 [2024-07-23 06:23:42.680700] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127d580) on tqpair(0x122e630): expected_datao=0, payload_size=512 00:27:49.529 [2024-07-23 06:23:42.680707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680717] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680724] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.529 [2024-07-23 06:23:42.680740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.529 [2024-07-23 06:23:42.680747] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680753] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=512, cccid=6 00:27:49.529 [2024-07-23 06:23:42.680761] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127d880) on tqpair(0x122e630): expected_datao=0, payload_size=512 00:27:49.529 [2024-07-23 06:23:42.680768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.529 [2024-07-23 
06:23:42.680777] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680784] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.529 [2024-07-23 06:23:42.680801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.529 [2024-07-23 06:23:42.680808] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680814] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122e630): datao=0, datal=4096, cccid=7 00:27:49.529 [2024-07-23 06:23:42.680825] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127da00) on tqpair(0x122e630): expected_datao=0, payload_size=4096 00:27:49.529 [2024-07-23 06:23:42.680833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680842] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680849] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.529 [2024-07-23 06:23:42.680865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.529 [2024-07-23 06:23:42.680872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d700) on tqpair=0x122e630 00:27:49.529 [2024-07-23 06:23:42.680897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.529 [2024-07-23 06:23:42.680907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.529 [2024-07-23 06:23:42.680914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d580) on tqpair=0x122e630 00:27:49.529 [2024-07-23 06:23:42.680949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.529 [2024-07-23 06:23:42.680959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.529 [2024-07-23 06:23:42.680965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.680972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d880) on tqpair=0x122e630 00:27:49.529 [2024-07-23 06:23:42.680982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.529 [2024-07-23 06:23:42.680990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.529 [2024-07-23 06:23:42.680997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.529 [2024-07-23 06:23:42.681003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127da00) on tqpair=0x122e630 00:27:49.529 ===================================================== 00:27:49.529 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.529 ===================================================== 00:27:49.529 Controller Capabilities/Features 00:27:49.529 ================================ 00:27:49.529 Vendor ID: 8086 00:27:49.529 Subsystem Vendor ID: 8086 00:27:49.529 Serial Number: SPDK00000000000001 00:27:49.529 Model Number: SPDK bdev Controller 00:27:49.529 Firmware Version: 24.09 00:27:49.529 Recommended Arb Burst: 6 00:27:49.529 
IEEE OUI Identifier: e4 d2 5c 00:27:49.529 Multi-path I/O 00:27:49.529 May have multiple subsystem ports: Yes 00:27:49.529 May have multiple controllers: Yes 00:27:49.529 Associated with SR-IOV VF: No 00:27:49.529 Max Data Transfer Size: 131072 00:27:49.529 Max Number of Namespaces: 32 00:27:49.529 Max Number of I/O Queues: 127 00:27:49.529 NVMe Specification Version (VS): 1.3 00:27:49.529 NVMe Specification Version (Identify): 1.3 00:27:49.529 Maximum Queue Entries: 128 00:27:49.529 Contiguous Queues Required: Yes 00:27:49.529 Arbitration Mechanisms Supported 00:27:49.529 Weighted Round Robin: Not Supported 00:27:49.529 Vendor Specific: Not Supported 00:27:49.529 Reset Timeout: 15000 ms 00:27:49.529 Doorbell Stride: 4 bytes 00:27:49.529 NVM Subsystem Reset: Not Supported 00:27:49.529 Command Sets Supported 00:27:49.529 NVM Command Set: Supported 00:27:49.529 Boot Partition: Not Supported 00:27:49.529 Memory Page Size Minimum: 4096 bytes 00:27:49.529 Memory Page Size Maximum: 4096 bytes 00:27:49.529 Persistent Memory Region: Not Supported 00:27:49.529 Optional Asynchronous Events Supported 00:27:49.529 Namespace Attribute Notices: Supported 00:27:49.529 Firmware Activation Notices: Not Supported 00:27:49.529 ANA Change Notices: Not Supported 00:27:49.529 PLE Aggregate Log Change Notices: Not Supported 00:27:49.529 LBA Status Info Alert Notices: Not Supported 00:27:49.529 EGE Aggregate Log Change Notices: Not Supported 00:27:49.529 Normal NVM Subsystem Shutdown event: Not Supported 00:27:49.529 Zone Descriptor Change Notices: Not Supported 00:27:49.529 Discovery Log Change Notices: Not Supported 00:27:49.529 Controller Attributes 00:27:49.529 128-bit Host Identifier: Supported 00:27:49.529 Non-Operational Permissive Mode: Not Supported 00:27:49.529 NVM Sets: Not Supported 00:27:49.529 Read Recovery Levels: Not Supported 00:27:49.529 Endurance Groups: Not Supported 00:27:49.529 Predictable Latency Mode: Not Supported 00:27:49.529 Traffic Based Keep ALive: Not Supported 00:27:49.529 Namespace Granularity: Not Supported 00:27:49.529 SQ Associations: Not Supported 00:27:49.529 UUID List: Not Supported 00:27:49.529 Multi-Domain Subsystem: Not Supported 00:27:49.529 Fixed Capacity Management: Not Supported 00:27:49.529 Variable Capacity Management: Not Supported 00:27:49.529 Delete Endurance Group: Not Supported 00:27:49.529 Delete NVM Set: Not Supported 00:27:49.530 Extended LBA Formats Supported: Not Supported 00:27:49.530 Flexible Data Placement Supported: Not Supported 00:27:49.530 00:27:49.530 Controller Memory Buffer Support 00:27:49.530 ================================ 00:27:49.530 Supported: No 00:27:49.530 00:27:49.530 Persistent Memory Region Support 00:27:49.530 ================================ 00:27:49.530 Supported: No 00:27:49.530 00:27:49.530 Admin Command Set Attributes 00:27:49.530 ============================ 00:27:49.530 Security Send/Receive: Not Supported 00:27:49.530 Format NVM: Not Supported 00:27:49.530 Firmware Activate/Download: Not Supported 00:27:49.530 Namespace Management: Not Supported 00:27:49.530 Device Self-Test: Not Supported 00:27:49.530 Directives: Not Supported 00:27:49.530 NVMe-MI: Not Supported 00:27:49.530 Virtualization Management: Not Supported 00:27:49.530 Doorbell Buffer Config: Not Supported 00:27:49.530 Get LBA Status Capability: Not Supported 00:27:49.530 Command & Feature Lockdown Capability: Not Supported 00:27:49.530 Abort Command Limit: 4 00:27:49.530 Async Event Request Limit: 4 00:27:49.530 Number of Firmware Slots: N/A 00:27:49.530 Firmware 
Slot 1 Read-Only: N/A 00:27:49.530 Firmware Activation Without Reset: N/A 00:27:49.530 Multiple Update Detection Support: N/A 00:27:49.530 Firmware Update Granularity: No Information Provided 00:27:49.530 Per-Namespace SMART Log: No 00:27:49.530 Asymmetric Namespace Access Log Page: Not Supported 00:27:49.530 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:49.530 Command Effects Log Page: Supported 00:27:49.530 Get Log Page Extended Data: Supported 00:27:49.530 Telemetry Log Pages: Not Supported 00:27:49.530 Persistent Event Log Pages: Not Supported 00:27:49.530 Supported Log Pages Log Page: May Support 00:27:49.530 Commands Supported & Effects Log Page: Not Supported 00:27:49.530 Feature Identifiers & Effects Log Page:May Support 00:27:49.530 NVMe-MI Commands & Effects Log Page: May Support 00:27:49.530 Data Area 4 for Telemetry Log: Not Supported 00:27:49.530 Error Log Page Entries Supported: 128 00:27:49.530 Keep Alive: Supported 00:27:49.530 Keep Alive Granularity: 10000 ms 00:27:49.530 00:27:49.530 NVM Command Set Attributes 00:27:49.530 ========================== 00:27:49.530 Submission Queue Entry Size 00:27:49.530 Max: 64 00:27:49.530 Min: 64 00:27:49.530 Completion Queue Entry Size 00:27:49.530 Max: 16 00:27:49.530 Min: 16 00:27:49.530 Number of Namespaces: 32 00:27:49.530 Compare Command: Supported 00:27:49.530 Write Uncorrectable Command: Not Supported 00:27:49.530 Dataset Management Command: Supported 00:27:49.530 Write Zeroes Command: Supported 00:27:49.530 Set Features Save Field: Not Supported 00:27:49.530 Reservations: Supported 00:27:49.530 Timestamp: Not Supported 00:27:49.530 Copy: Supported 00:27:49.530 Volatile Write Cache: Present 00:27:49.530 Atomic Write Unit (Normal): 1 00:27:49.530 Atomic Write Unit (PFail): 1 00:27:49.530 Atomic Compare & Write Unit: 1 00:27:49.530 Fused Compare & Write: Supported 00:27:49.530 Scatter-Gather List 00:27:49.530 SGL Command Set: Supported 00:27:49.530 SGL Keyed: Supported 00:27:49.530 SGL Bit Bucket Descriptor: Not Supported 00:27:49.530 SGL Metadata Pointer: Not Supported 00:27:49.530 Oversized SGL: Not Supported 00:27:49.530 SGL Metadata Address: Not Supported 00:27:49.530 SGL Offset: Supported 00:27:49.530 Transport SGL Data Block: Not Supported 00:27:49.530 Replay Protected Memory Block: Not Supported 00:27:49.530 00:27:49.530 Firmware Slot Information 00:27:49.530 ========================= 00:27:49.530 Active slot: 1 00:27:49.530 Slot 1 Firmware Revision: 24.09 00:27:49.530 00:27:49.530 00:27:49.530 Commands Supported and Effects 00:27:49.530 ============================== 00:27:49.530 Admin Commands 00:27:49.530 -------------- 00:27:49.530 Get Log Page (02h): Supported 00:27:49.530 Identify (06h): Supported 00:27:49.530 Abort (08h): Supported 00:27:49.530 Set Features (09h): Supported 00:27:49.530 Get Features (0Ah): Supported 00:27:49.530 Asynchronous Event Request (0Ch): Supported 00:27:49.530 Keep Alive (18h): Supported 00:27:49.530 I/O Commands 00:27:49.530 ------------ 00:27:49.530 Flush (00h): Supported LBA-Change 00:27:49.530 Write (01h): Supported LBA-Change 00:27:49.530 Read (02h): Supported 00:27:49.530 Compare (05h): Supported 00:27:49.530 Write Zeroes (08h): Supported LBA-Change 00:27:49.530 Dataset Management (09h): Supported LBA-Change 00:27:49.530 Copy (19h): Supported LBA-Change 00:27:49.530 00:27:49.530 Error Log 00:27:49.530 ========= 00:27:49.530 00:27:49.530 Arbitration 00:27:49.530 =========== 00:27:49.530 Arbitration Burst: 1 00:27:49.530 00:27:49.530 Power Management 00:27:49.530 ================ 
00:27:49.530 Number of Power States: 1 00:27:49.530 Current Power State: Power State #0 00:27:49.530 Power State #0: 00:27:49.530 Max Power: 0.00 W 00:27:49.530 Non-Operational State: Operational 00:27:49.530 Entry Latency: Not Reported 00:27:49.530 Exit Latency: Not Reported 00:27:49.530 Relative Read Throughput: 0 00:27:49.530 Relative Read Latency: 0 00:27:49.530 Relative Write Throughput: 0 00:27:49.530 Relative Write Latency: 0 00:27:49.530 Idle Power: Not Reported 00:27:49.530 Active Power: Not Reported 00:27:49.530 Non-Operational Permissive Mode: Not Supported 00:27:49.530 00:27:49.530 Health Information 00:27:49.530 ================== 00:27:49.530 Critical Warnings: 00:27:49.530 Available Spare Space: OK 00:27:49.530 Temperature: OK 00:27:49.530 Device Reliability: OK 00:27:49.530 Read Only: No 00:27:49.530 Volatile Memory Backup: OK 00:27:49.530 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:49.530 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:49.530 Available Spare: 0% 00:27:49.530 Available Spare Threshold: 0% 00:27:49.530 Life Percentage Used:[2024-07-23 06:23:42.681116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.530 [2024-07-23 06:23:42.681128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x122e630) 00:27:49.530 [2024-07-23 06:23:42.681139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-07-23 06:23:42.681162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127da00, cid 7, qid 0 00:27:49.530 [2024-07-23 06:23:42.681442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.530 [2024-07-23 06:23:42.681458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.530 [2024-07-23 06:23:42.681465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.530 [2024-07-23 06:23:42.681472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127da00) on tqpair=0x122e630 00:27:49.530 [2024-07-23 06:23:42.681523] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:49.530 [2024-07-23 06:23:42.681545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127cf80) on tqpair=0x122e630 00:27:49.530 [2024-07-23 06:23:42.681571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.530 [2024-07-23 06:23:42.681579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d100) on tqpair=0x122e630 00:27:49.530 [2024-07-23 06:23:42.681587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.530 [2024-07-23 06:23:42.681595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d280) on tqpair=0x122e630 00:27:49.530 [2024-07-23 06:23:42.681602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.530 [2024-07-23 06:23:42.681639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.530 [2024-07-23 06:23:42.681650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.531 [2024-07-23 06:23:42.681663] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.681671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.681677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.681688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.681711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.681897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.681914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.681921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.681928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.681940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.681948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.681954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.681965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.681994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.682148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.682164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.682171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.682191] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:49.531 [2024-07-23 06:23:42.682199] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:49.531 [2024-07-23 06:23:42.682216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682236] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.682247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.682283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.682509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.682525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.682532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.682558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.682588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.682609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.682779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.682796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.682803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.682829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.682847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.682858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.682881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.683053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.683070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.683077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.683104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.683133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.683155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.683293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.683310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.683317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.683342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683354] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.683372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.683393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.683536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.683552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.683559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.683588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.683624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.683650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.683823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.683843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.683852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.683878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.683896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.683907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.683929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.684179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.684196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.684203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.684210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.684244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.684255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.684261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 
[2024-07-23 06:23:42.684272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.684293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.531 [2024-07-23 06:23:42.684483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.531 [2024-07-23 06:23:42.684501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.531 [2024-07-23 06:23:42.684508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.684515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.531 [2024-07-23 06:23:42.684533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.684546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.531 [2024-07-23 06:23:42.684553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122e630) 00:27:49.531 [2024-07-23 06:23:42.684564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-07-23 06:23:42.684586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127d400, cid 3, qid 0 00:27:49.532 [2024-07-23 06:23:42.688638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.532 [2024-07-23 06:23:42.688655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.532 [2024-07-23 06:23:42.688662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.532 [2024-07-23 06:23:42.688669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127d400) on tqpair=0x122e630 00:27:49.532 [2024-07-23 06:23:42.688684] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:27:49.532 0% 00:27:49.532 Data Units Read: 0 00:27:49.532 Data Units Written: 0 00:27:49.532 Host Read Commands: 0 00:27:49.532 Host Write Commands: 0 00:27:49.532 Controller Busy Time: 0 minutes 00:27:49.532 Power Cycles: 0 00:27:49.532 Power On Hours: 0 hours 00:27:49.532 Unsafe Shutdowns: 0 00:27:49.532 Unrecoverable Media Errors: 0 00:27:49.532 Lifetime Error Log Entries: 0 00:27:49.532 Warning Temperature Time: 0 minutes 00:27:49.532 Critical Temperature Time: 0 minutes 00:27:49.532 00:27:49.532 Number of Queues 00:27:49.532 ================ 00:27:49.532 Number of I/O Submission Queues: 127 00:27:49.532 Number of I/O Completion Queues: 127 00:27:49.532 00:27:49.532 Active Namespaces 00:27:49.532 ================= 00:27:49.532 Namespace ID:1 00:27:49.532 Error Recovery Timeout: Unlimited 00:27:49.532 Command Set Identifier: NVM (00h) 00:27:49.532 Deallocate: Supported 00:27:49.532 Deallocated/Unwritten Error: Not Supported 00:27:49.532 Deallocated Read Value: Unknown 00:27:49.532 Deallocate in Write Zeroes: Not Supported 00:27:49.532 Deallocated Guard Field: 0xFFFF 00:27:49.532 Flush: Supported 00:27:49.532 Reservation: Supported 00:27:49.532 Namespace Sharing Capabilities: Multiple Controllers 00:27:49.532 Size (in LBAs): 131072 (0GiB) 00:27:49.532 Capacity (in LBAs): 131072 (0GiB) 00:27:49.532 Utilization (in LBAs): 131072 (0GiB) 00:27:49.532 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:49.532 EUI64: ABCDEF0123456789 00:27:49.532 UUID: 33b9fbce-ca25-42c0-a9ae-6fe36cba29f8 00:27:49.532 
Thin Provisioning: Not Supported 00:27:49.532 Per-NS Atomic Units: Yes 00:27:49.532 Atomic Boundary Size (Normal): 0 00:27:49.532 Atomic Boundary Size (PFail): 0 00:27:49.532 Atomic Boundary Offset: 0 00:27:49.532 Maximum Single Source Range Length: 65535 00:27:49.532 Maximum Copy Length: 65535 00:27:49.532 Maximum Source Range Count: 1 00:27:49.532 NGUID/EUI64 Never Reused: No 00:27:49.532 Namespace Write Protected: No 00:27:49.532 Number of LBA Formats: 1 00:27:49.532 Current LBA Format: LBA Format #00 00:27:49.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:49.532 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.532 rmmod nvme_tcp 00:27:49.532 rmmod nvme_fabrics 00:27:49.532 rmmod nvme_keyring 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1830010 ']' 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1830010 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1830010 ']' 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1830010 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1830010 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1830010' 00:27:49.532 killing process with pid 1830010 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 
-- # kill 1830010 00:27:49.532 06:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1830010 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.792 06:23:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.334 00:27:52.334 real 0m5.279s 00:27:52.334 user 0m4.218s 00:27:52.334 sys 0m1.821s 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:52.334 ************************************ 00:27:52.334 END TEST nvmf_identify 00:27:52.334 ************************************ 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.334 ************************************ 00:27:52.334 START TEST nvmf_perf 00:27:52.334 ************************************ 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:52.334 * Looking for test storage... 
00:27:52.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:52.334 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
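The MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults traced above are the same 64 MiB, 512-byte-block geometry that the identify run earlier in this log reported for namespace 1 (Size (in LBAs): 131072, LBA Format #00: Data Size: 512). A minimal sketch of that arithmetic, with illustrative variable names and only values taken from this run:

    # 64 MiB malloc bdev exposed with 512-byte logical blocks
    bdev_mib=64
    block_size=512
    lbas=$(( bdev_mib * 1024 * 1024 / block_size ))
    echo "$lbas"   # 131072, matching "Size (in LBAs): 131072" above; 64 MiB rounds down to the reported 0GiB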
00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.335 06:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.238 
06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:27:54.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.238 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.238 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:27:54.239 00:27:54.239 --- 10.0.0.2 ping statistics --- 00:27:54.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.239 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:27:54.239 00:27:54.239 --- 10.0.0.1 ping statistics --- 00:27:54.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.239 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1832076 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1832076 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1832076 ']' 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
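Condensed, the nvmftestinit/nvmfappstart trace above builds the following NVMe/TCP test topology before the perf runs start. Every command is taken from the trace; only the nvmf_tgt path is shortened here, and the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this host and run:

    # move the target-side e810 port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 on cvl_0_1, the namespaced target port gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port from the initiator side and sanity-ping both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target app then runs inside the namespace: 4 reactors (-m 0xF), all tracepoint groups (-e 0xFFFF)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF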
00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.239 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 [2024-07-23 06:23:47.309651] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:27:54.239 [2024-07-23 06:23:47.309735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.239 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.239 [2024-07-23 06:23:47.352484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:54.239 [2024-07-23 06:23:47.383494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.239 [2024-07-23 06:23:47.477417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.239 [2024-07-23 06:23:47.477472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.239 [2024-07-23 06:23:47.477487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.239 [2024-07-23 06:23:47.477498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.239 [2024-07-23 06:23:47.477509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.239 [2024-07-23 06:23:47.477573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.239 [2024-07-23 06:23:47.477640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.239 [2024-07-23 06:23:47.477681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.239 [2024-07-23 06:23:47.477684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:54.497 06:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:57.781 06:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:57.781 06:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:57.781 06:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:57.781 06:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:58.038 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:58.038 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:58.038 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:58.038 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:58.038 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:58.295 [2024-07-23 06:23:51.481323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.295 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.552 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:58.552 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.810 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:58.810 06:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:59.068 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.326 [2024-07-23 06:23:52.468973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.326 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:59.583 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:59.583 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:59.583 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:59.583 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:00.969 Initializing NVMe Controllers 00:28:00.969 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:00.969 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:00.969 Initialization complete. Launching workers. 
00:28:00.969 ======================================================== 00:28:00.969 Latency(us) 00:28:00.969 Device Information : IOPS MiB/s Average min max 00:28:00.969 PCIE (0000:88:00.0) NSID 1 from core 0: 85987.18 335.89 371.66 33.91 8275.39 00:28:00.969 ======================================================== 00:28:00.969 Total : 85987.18 335.89 371.66 33.91 8275.39 00:28:00.969 00:28:00.969 06:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.969 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.901 Initializing NVMe Controllers 00:28:01.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:01.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:01.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:01.901 Initialization complete. Launching workers. 00:28:01.901 ======================================================== 00:28:01.902 Latency(us) 00:28:01.902 Device Information : IOPS MiB/s Average min max 00:28:01.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.00 0.36 11126.16 196.70 45715.04 00:28:01.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15824.26 7033.88 47901.36 00:28:01.902 ======================================================== 00:28:01.902 Total : 157.00 0.61 13101.16 196.70 47901.36 00:28:01.902 00:28:01.902 06:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:01.902 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.275 Initializing NVMe Controllers 00:28:03.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:03.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:03.275 Initialization complete. Launching workers. 
00:28:03.275 ======================================================== 00:28:03.275 Latency(us) 00:28:03.275 Device Information : IOPS MiB/s Average min max 00:28:03.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7758.98 30.31 4134.24 547.85 10875.02 00:28:03.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3854.99 15.06 8338.59 4414.91 15927.01 00:28:03.275 ======================================================== 00:28:03.275 Total : 11613.98 45.37 5529.77 547.85 15927.01 00:28:03.275 00:28:03.275 06:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:03.275 06:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:03.275 06:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:03.275 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.556 Initializing NVMe Controllers 00:28:06.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.556 Controller IO queue size 128, less than required. 00:28:06.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.556 Controller IO queue size 128, less than required. 00:28:06.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:06.556 Initialization complete. Launching workers. 00:28:06.556 ======================================================== 00:28:06.556 Latency(us) 00:28:06.556 Device Information : IOPS MiB/s Average min max 00:28:06.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 978.83 244.71 134070.60 71435.39 185390.60 00:28:06.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.97 150.74 222329.15 55851.53 346699.17 00:28:06.556 ======================================================== 00:28:06.556 Total : 1581.79 395.45 167714.12 55851.53 346699.17 00:28:06.556 00:28:06.556 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:06.556 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.556 No valid NVMe controllers or AIO or URING devices found 00:28:06.556 Initializing NVMe Controllers 00:28:06.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.556 Controller IO queue size 128, less than required. 00:28:06.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.556 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:06.556 Controller IO queue size 128, less than required. 00:28:06.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.556 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:06.556 WARNING: Some requested NVMe devices were skipped 00:28:06.556 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:06.556 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.085 Initializing NVMe Controllers 00:28:09.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.085 Controller IO queue size 128, less than required. 00:28:09.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.085 Controller IO queue size 128, less than required. 00:28:09.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:09.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:09.085 Initialization complete. Launching workers. 00:28:09.085 00:28:09.085 ==================== 00:28:09.085 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:09.085 TCP transport: 00:28:09.085 polls: 31662 00:28:09.085 idle_polls: 12052 00:28:09.085 sock_completions: 19610 00:28:09.085 nvme_completions: 3737 00:28:09.085 submitted_requests: 5620 00:28:09.085 queued_requests: 1 00:28:09.085 00:28:09.085 ==================== 00:28:09.085 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:09.085 TCP transport: 00:28:09.085 polls: 32127 00:28:09.085 idle_polls: 12641 00:28:09.085 sock_completions: 19486 00:28:09.085 nvme_completions: 3845 00:28:09.085 submitted_requests: 5784 00:28:09.085 queued_requests: 1 00:28:09.085 ======================================================== 00:28:09.085 Latency(us) 00:28:09.085 Device Information : IOPS MiB/s Average min max 00:28:09.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 933.99 233.50 141325.71 82191.90 246715.00 00:28:09.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 960.99 240.25 136477.80 63497.42 205504.42 00:28:09.085 ======================================================== 00:28:09.085 Total : 1894.98 473.74 138867.22 63497.42 246715.00 00:28:09.085 00:28:09.085 06:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:09.085 06:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:09.085 06:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:09.085 06:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:09.085 06:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:12.362 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9d8eb063-15c8-4fa4-81a1-487d8c8b476d 00:28:12.362 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9d8eb063-15c8-4fa4-81a1-487d8c8b476d 00:28:12.362 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=9d8eb063-15c8-4fa4-81a1-487d8c8b476d 00:28:12.362 06:24:05 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:12.362 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:12.362 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:12.362 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:12.619 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:12.619 { 00:28:12.619 "uuid": "9d8eb063-15c8-4fa4-81a1-487d8c8b476d", 00:28:12.619 "name": "lvs_0", 00:28:12.619 "base_bdev": "Nvme0n1", 00:28:12.619 "total_data_clusters": 238234, 00:28:12.619 "free_clusters": 238234, 00:28:12.619 "block_size": 512, 00:28:12.619 "cluster_size": 4194304 00:28:12.619 } 00:28:12.619 ]' 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9d8eb063-15c8-4fa4-81a1-487d8c8b476d") .free_clusters' 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9d8eb063-15c8-4fa4-81a1-487d8c8b476d") .cluster_size' 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:12.620 952936 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:12.620 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d8eb063-15c8-4fa4-81a1-487d8c8b476d lbd_0 20480 00:28:13.186 06:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=f10181b3-d0ab-4879-8006-55c72a5ce8d6 00:28:13.186 06:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore f10181b3-d0ab-4879-8006-55c72a5ce8d6 lvs_n_0 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=02fb8e24-92ac-400e-b5e4-dd94315de5fd 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 02fb8e24-92ac-400e-b5e4-dd94315de5fd 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=02fb8e24-92ac-400e-b5e4-dd94315de5fd 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:14.128 { 00:28:14.128 "uuid": "9d8eb063-15c8-4fa4-81a1-487d8c8b476d", 00:28:14.128 "name": "lvs_0", 00:28:14.128 "base_bdev": "Nvme0n1", 00:28:14.128 "total_data_clusters": 238234, 
00:28:14.128 "free_clusters": 233114, 00:28:14.128 "block_size": 512, 00:28:14.128 "cluster_size": 4194304 00:28:14.128 }, 00:28:14.128 { 00:28:14.128 "uuid": "02fb8e24-92ac-400e-b5e4-dd94315de5fd", 00:28:14.128 "name": "lvs_n_0", 00:28:14.128 "base_bdev": "f10181b3-d0ab-4879-8006-55c72a5ce8d6", 00:28:14.128 "total_data_clusters": 5114, 00:28:14.128 "free_clusters": 5114, 00:28:14.128 "block_size": 512, 00:28:14.128 "cluster_size": 4194304 00:28:14.128 } 00:28:14.128 ]' 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="02fb8e24-92ac-400e-b5e4-dd94315de5fd") .free_clusters' 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:14.128 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="02fb8e24-92ac-400e-b5e4-dd94315de5fd") .cluster_size' 00:28:14.386 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:14.386 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:14.386 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:14.386 20456 00:28:14.386 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:14.386 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 02fb8e24-92ac-400e-b5e4-dd94315de5fd lbd_nest_0 20456 00:28:14.645 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=ab1330be-9bd7-4f50-acac-508aae4580b2 00:28:14.645 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:14.902 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:14.903 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ab1330be-9bd7-4f50-acac-508aae4580b2 00:28:15.161 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.419 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:15.419 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:15.419 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:15.419 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:15.419 06:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:15.419 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.623 Initializing NVMe Controllers 00:28:27.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.623 Initialization complete. Launching workers. 
00:28:27.623 ======================================================== 00:28:27.623 Latency(us) 00:28:27.623 Device Information : IOPS MiB/s Average min max 00:28:27.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.90 0.02 20925.50 253.31 46036.85 00:28:27.623 ======================================================== 00:28:27.623 Total : 47.90 0.02 20925.50 253.31 46036.85 00:28:27.623 00:28:27.623 06:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:27.623 06:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.623 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.603 Initializing NVMe Controllers 00:28:37.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.603 Initialization complete. Launching workers. 00:28:37.603 ======================================================== 00:28:37.603 Latency(us) 00:28:37.603 Device Information : IOPS MiB/s Average min max 00:28:37.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.90 9.11 13736.73 4042.40 47899.97 00:28:37.603 ======================================================== 00:28:37.603 Total : 72.90 9.11 13736.73 4042.40 47899.97 00:28:37.603 00:28:37.603 06:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:37.603 06:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:37.603 06:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.603 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.589 Initializing NVMe Controllers 00:28:47.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:47.589 Initialization complete. Launching workers. 00:28:47.589 ======================================================== 00:28:47.589 Latency(us) 00:28:47.589 Device Information : IOPS MiB/s Average min max 00:28:47.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6998.98 3.42 4572.80 378.35 12086.49 00:28:47.589 ======================================================== 00:28:47.589 Total : 6998.98 3.42 4572.80 378.35 12086.49 00:28:47.589 00:28:47.589 06:24:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:47.589 06:24:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:47.589 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.596 Initializing NVMe Controllers 00:28:57.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:57.596 Initialization complete. Launching workers. 
00:28:57.596 ======================================================== 00:28:57.596 Latency(us) 00:28:57.596 Device Information : IOPS MiB/s Average min max 00:28:57.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1803.77 225.47 17747.47 1828.21 38230.27 00:28:57.596 ======================================================== 00:28:57.596 Total : 1803.77 225.47 17747.47 1828.21 38230.27 00:28:57.596 00:28:57.596 06:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:57.596 06:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:57.596 06:24:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:57.596 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.581 Initializing NVMe Controllers 00:29:07.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.581 Controller IO queue size 128, less than required. 00:29:07.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.581 Initialization complete. Launching workers. 00:29:07.581 ======================================================== 00:29:07.581 Latency(us) 00:29:07.581 Device Information : IOPS MiB/s Average min max 00:29:07.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11894.47 5.81 10761.39 1767.65 47760.52 00:29:07.581 ======================================================== 00:29:07.581 Total : 11894.47 5.81 10761.39 1767.65 47760.52 00:29:07.581 00:29:07.581 06:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:07.582 06:25:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.582 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.570 Initializing NVMe Controllers 00:29:17.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.570 Controller IO queue size 128, less than required. 00:29:17.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:17.571 Initialization complete. Launching workers. 
00:29:17.571 ======================================================== 00:29:17.571 Latency(us) 00:29:17.571 Device Information : IOPS MiB/s Average min max 00:29:17.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1211.38 151.42 106272.03 22756.42 243639.92 00:29:17.571 ======================================================== 00:29:17.571 Total : 1211.38 151.42 106272.03 22756.42 243639.92 00:29:17.571 00:29:17.571 06:25:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:17.830 06:25:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ab1330be-9bd7-4f50-acac-508aae4580b2 00:29:18.399 06:25:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:18.657 06:25:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f10181b3-d0ab-4879-8006-55c72a5ce8d6 00:29:18.915 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:19.173 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:19.173 rmmod nvme_tcp 00:29:19.173 rmmod nvme_fabrics 00:29:19.173 rmmod nvme_keyring 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1832076 ']' 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1832076 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1832076 ']' 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1832076 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832076 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 1832076' 00:29:19.432 killing process with pid 1832076 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1832076 00:29:19.432 06:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1832076 00:29:20.839 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.840 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:20.840 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.840 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.840 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.840 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.840 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.840 06:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:23.379 00:29:23.379 real 1m31.043s 00:29:23.379 user 5m36.571s 00:29:23.379 sys 0m15.707s 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:23.379 ************************************ 00:29:23.379 END TEST nvmf_perf 00:29:23.379 ************************************ 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.379 ************************************ 00:29:23.379 START TEST nvmf_fio_host 00:29:23.379 ************************************ 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:23.379 * Looking for test storage... 
00:29:23.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.379 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:23.380 06:25:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.284 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.284 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.284 
06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.284 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:25.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:29:25.285 00:29:25.285 --- 10.0.0.2 ping statistics --- 00:29:25.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.285 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:29:25.285 00:29:25.285 --- 10.0.0.1 ping statistics --- 00:29:25.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.285 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1844668 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 1844668 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1844668 ']' 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.285 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.285 [2024-07-23 06:25:18.453566] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:29:25.285 [2024-07-23 06:25:18.453647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.285 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.285 [2024-07-23 06:25:18.492489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:25.285 [2024-07-23 06:25:18.525360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.285 [2024-07-23 06:25:18.622645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.285 [2024-07-23 06:25:18.622718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.285 [2024-07-23 06:25:18.622734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.285 [2024-07-23 06:25:18.622746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.285 [2024-07-23 06:25:18.622757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:25.285 [2024-07-23 06:25:18.622823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.285 [2024-07-23 06:25:18.622908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.285 [2024-07-23 06:25:18.622932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.285 [2024-07-23 06:25:18.622936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.544 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:25.544 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:25.544 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:25.802 [2024-07-23 06:25:18.965455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.802 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:25.802 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.802 06:25:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.802 06:25:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:26.059 Malloc1 00:29:26.059 06:25:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.317 06:25:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:26.575 06:25:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.833 [2024-07-23 06:25:20.012873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.833 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:27.091 06:25:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:27.091 06:25:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:27.349 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:27.349 fio-3.35 00:29:27.349 Starting 1 thread 00:29:27.349 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.876 00:29:29.876 test: (groupid=0, jobs=1): err= 0: pid=1845062: Tue Jul 23 06:25:22 2024 00:29:29.876 read: IOPS=9348, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2006msec) 00:29:29.876 slat (nsec): min=1904, max=110684, avg=2440.43, stdev=1361.48 00:29:29.876 clat (usec): min=3211, max=13103, avg=7584.24, stdev=546.84 00:29:29.876 lat (usec): min=3233, max=13105, avg=7586.68, stdev=546.77 00:29:29.876 clat percentiles (usec): 00:29:29.876 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:29:29.876 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:29:29.876 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:29:29.876 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[11338], 99.95th=[11994], 00:29:29.876 | 99.99th=[13042] 00:29:29.876 bw ( KiB/s): min=36359, max=38112, per=99.87%, avg=37347.75, stdev=730.53, samples=4 00:29:29.876 iops : min= 9089, max= 9528, avg=9336.75, stdev=182.97, samples=4 00:29:29.876 write: IOPS=9352, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2006msec); 0 
zone resets 00:29:29.876 slat (nsec): min=2097, max=91340, avg=2581.27, stdev=1102.76 00:29:29.876 clat (usec): min=1202, max=11841, avg=6073.26, stdev=487.80 00:29:29.876 lat (usec): min=1209, max=11843, avg=6075.85, stdev=487.78 00:29:29.876 clat percentiles (usec): 00:29:29.876 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:29:29.876 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:29:29.876 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:29:29.876 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 9634], 99.95th=[10421], 00:29:29.876 | 99.99th=[11863] 00:29:29.876 bw ( KiB/s): min=37136, max=37616, per=99.94%, avg=37391.25, stdev=198.64, samples=4 00:29:29.876 iops : min= 9284, max= 9404, avg=9347.75, stdev=49.67, samples=4 00:29:29.876 lat (msec) : 2=0.02%, 4=0.11%, 10=99.74%, 20=0.13% 00:29:29.876 cpu : usr=56.01%, sys=36.96%, ctx=61, majf=0, minf=38 00:29:29.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:29.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:29.876 issued rwts: total=18753,18762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:29.876 00:29:29.876 Run status group 0 (all jobs): 00:29:29.876 READ: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.8MB), run=2006-2006msec 00:29:29.876 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.8MB), run=2006-2006msec 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.876 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:29.877 06:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:29.877 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:29.877 fio-3.35 00:29:29.877 Starting 1 thread 00:29:29.877 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.405 00:29:32.405 test: (groupid=0, jobs=1): err= 0: pid=1845480: Tue Jul 23 06:25:25 2024 00:29:32.405 read: IOPS=8166, BW=128MiB/s (134MB/s)(256MiB/2010msec) 00:29:32.405 slat (nsec): min=2894, max=93821, avg=3813.72, stdev=1661.43 00:29:32.405 clat (usec): min=2161, max=17148, avg=9347.09, stdev=2351.94 00:29:32.405 lat (usec): min=2164, max=17152, avg=9350.90, stdev=2352.02 00:29:32.405 clat percentiles (usec): 00:29:32.405 | 1.00th=[ 4621], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7373], 00:29:32.405 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:29:32.405 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12387], 95.00th=[13698], 00:29:32.405 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16909], 99.95th=[16909], 00:29:32.405 | 99.99th=[17171] 00:29:32.405 bw ( KiB/s): min=60384, max=77408, per=52.89%, avg=69104.00, stdev=7682.91, samples=4 00:29:32.405 iops : min= 3774, max= 4838, avg=4319.00, stdev=480.18, samples=4 00:29:32.405 write: IOPS=4752, BW=74.3MiB/s (77.9MB/s)(141MiB/1894msec); 0 zone resets 00:29:32.405 slat (usec): min=30, max=193, avg=34.19, stdev= 5.95 00:29:32.405 clat (usec): min=3435, max=18803, avg=10969.91, stdev=1936.51 00:29:32.405 lat (usec): min=3468, max=18834, avg=11004.10, stdev=1937.54 00:29:32.405 clat percentiles (usec): 00:29:32.405 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9241], 00:29:32.405 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:29:32.405 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13566], 95.00th=[14484], 00:29:32.405 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17171], 99.95th=[18482], 00:29:32.405 | 99.99th=[18744] 00:29:32.405 bw ( KiB/s): min=62496, max=79648, per=93.95%, avg=71440.00, stdev=7887.53, samples=4 00:29:32.405 iops : min= 3906, max= 4978, avg=4465.00, stdev=492.97, samples=4 00:29:32.405 lat (msec) : 4=0.26%, 10=52.32%, 20=47.42% 00:29:32.405 cpu : usr=74.96%, sys=21.35%, ctx=18, majf=0, minf=62 
00:29:32.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:32.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:32.405 issued rwts: total=16415,9001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:32.405 00:29:32.405 Run status group 0 (all jobs): 00:29:32.405 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (269MB), run=2010-2010msec 00:29:32.405 WRITE: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=141MiB (147MB), run=1894-1894msec 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:32.405 06:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:35.693 Nvme0n1 00:29:35.693 06:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:38.984 06:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=82da6d6b-9c51-4f76-b656-4739f7350b4b 00:29:38.984 06:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 82da6d6b-9c51-4f76-b656-4739f7350b4b 00:29:38.984 06:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=82da6d6b-9c51-4f76-b656-4739f7350b4b 00:29:38.984 06:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:38.984 06:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:38.984 06:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:38.984 06:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:38.984 { 00:29:38.984 "uuid": 
"82da6d6b-9c51-4f76-b656-4739f7350b4b", 00:29:38.984 "name": "lvs_0", 00:29:38.984 "base_bdev": "Nvme0n1", 00:29:38.984 "total_data_clusters": 930, 00:29:38.984 "free_clusters": 930, 00:29:38.984 "block_size": 512, 00:29:38.984 "cluster_size": 1073741824 00:29:38.984 } 00:29:38.984 ]' 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="82da6d6b-9c51-4f76-b656-4739f7350b4b") .free_clusters' 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="82da6d6b-9c51-4f76-b656-4739f7350b4b") .cluster_size' 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:38.984 952320 00:29:38.984 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:39.242 7de7adc9-fef6-41a3-91b3-0982d53e416b 00:29:39.242 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:39.501 06:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:39.759 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:40.018 06:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:40.276 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:40.276 fio-3.35 00:29:40.276 Starting 1 thread 00:29:40.276 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.805 00:29:42.806 test: (groupid=0, jobs=1): err= 0: pid=1846755: Tue Jul 23 06:25:35 2024 00:29:42.806 read: IOPS=6159, BW=24.1MiB/s (25.2MB/s)(48.3MiB/2008msec) 00:29:42.806 slat (usec): min=2, max=176, avg= 2.78, stdev= 2.46 00:29:42.806 clat (usec): min=941, max=171553, avg=11438.96, stdev=11523.46 00:29:42.806 lat (usec): min=944, max=171594, avg=11441.74, stdev=11523.81 00:29:42.806 clat percentiles (msec): 00:29:42.806 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:29:42.806 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:29:42.806 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:29:42.806 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:42.806 | 99.99th=[ 171] 00:29:42.806 bw ( KiB/s): min=17048, max=27208, per=99.86%, avg=24602.00, stdev=5036.94, samples=4 00:29:42.806 iops : min= 4262, max= 6802, avg=6150.50, stdev=1259.24, samples=4 00:29:42.806 write: IOPS=6143, BW=24.0MiB/s (25.2MB/s)(48.2MiB/2008msec); 0 zone resets 00:29:42.806 slat (usec): min=2, max=137, avg= 2.94, stdev= 1.88 00:29:42.806 clat (usec): min=339, max=169523, avg=9170.57, stdev=10823.41 00:29:42.806 lat (usec): min=343, max=169531, avg=9173.51, stdev=10823.75 00:29:42.806 clat percentiles (msec): 00:29:42.806 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:29:42.806 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:42.806 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:29:42.806 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:29:42.806 | 99.99th=[ 169] 00:29:42.806 bw ( KiB/s): min=18040, 
max=26816, per=99.91%, avg=24554.00, stdev=4344.56, samples=4 00:29:42.806 iops : min= 4510, max= 6704, avg=6138.50, stdev=1086.14, samples=4 00:29:42.806 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:42.806 lat (msec) : 2=0.02%, 4=0.13%, 10=60.52%, 20=38.79%, 250=0.52% 00:29:42.806 cpu : usr=55.80%, sys=38.81%, ctx=85, majf=0, minf=38 00:29:42.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:42.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:42.806 issued rwts: total=12368,12337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:42.806 00:29:42.806 Run status group 0 (all jobs): 00:29:42.806 READ: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.7MB), run=2008-2008msec 00:29:42.806 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.2MiB (50.5MB), run=2008-2008msec 00:29:42.806 06:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:42.806 06:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=202be4bc-935c-43e7-b82e-b3abd287b7f5 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 202be4bc-935c-43e7-b82e-b3abd287b7f5 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=202be4bc-935c-43e7-b82e-b3abd287b7f5 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:44.182 { 00:29:44.182 "uuid": "82da6d6b-9c51-4f76-b656-4739f7350b4b", 00:29:44.182 "name": "lvs_0", 00:29:44.182 "base_bdev": "Nvme0n1", 00:29:44.182 "total_data_clusters": 930, 00:29:44.182 "free_clusters": 0, 00:29:44.182 "block_size": 512, 00:29:44.182 "cluster_size": 1073741824 00:29:44.182 }, 00:29:44.182 { 00:29:44.182 "uuid": "202be4bc-935c-43e7-b82e-b3abd287b7f5", 00:29:44.182 "name": "lvs_n_0", 00:29:44.182 "base_bdev": "7de7adc9-fef6-41a3-91b3-0982d53e416b", 00:29:44.182 "total_data_clusters": 237847, 00:29:44.182 "free_clusters": 237847, 00:29:44.182 "block_size": 512, 00:29:44.182 "cluster_size": 4194304 00:29:44.182 } 00:29:44.182 ]' 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="202be4bc-935c-43e7-b82e-b3abd287b7f5") .free_clusters' 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="202be4bc-935c-43e7-b82e-b3abd287b7f5") .cluster_size' 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:44.182 951388 00:29:44.182 06:25:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:45.119 829914e2-1ba9-41cb-bee7-971c65fafdba 00:29:45.119 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:45.119 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:45.376 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:45.634 06:25:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:45.893 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:45.893 fio-3.35 00:29:45.893 Starting 1 thread 00:29:45.894 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.421 00:29:48.421 test: (groupid=0, jobs=1): err= 0: pid=1847492: Tue Jul 23 06:25:41 2024 00:29:48.421 read: IOPS=5816, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2008msec) 00:29:48.421 slat (usec): min=2, max=174, avg= 2.81, stdev= 2.66 00:29:48.421 clat (usec): min=4675, max=20474, avg=12175.18, stdev=1156.88 00:29:48.421 lat (usec): min=4684, max=20476, avg=12177.99, stdev=1156.72 00:29:48.421 clat percentiles (usec): 00:29:48.421 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:29:48.421 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:29:48.421 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13829], 00:29:48.421 | 99.00th=[16450], 99.50th=[18482], 99.90th=[20055], 99.95th=[20317], 00:29:48.421 | 99.99th=[20579] 00:29:48.421 bw ( KiB/s): min=21408, max=23864, per=99.84%, avg=23228.00, stdev=1213.62, samples=4 00:29:48.421 iops : min= 5352, max= 5966, avg=5807.00, stdev=303.41, samples=4 00:29:48.421 write: IOPS=5801, BW=22.7MiB/s (23.8MB/s)(45.5MiB/2008msec); 0 zone resets 00:29:48.421 slat (usec): min=2, max=163, avg= 2.98, stdev= 2.17 00:29:48.421 clat (usec): min=2527, max=16662, avg=9686.42, stdev=1032.35 00:29:48.421 lat (usec): min=2535, max=16665, avg=9689.40, stdev=1032.24 00:29:48.421 clat percentiles (usec): 00:29:48.421 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:48.421 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:29:48.421 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:29:48.421 | 99.00th=[13829], 99.50th=[15008], 99.90th=[16450], 99.95th=[16581], 00:29:48.421 | 99.99th=[16712] 00:29:48.421 bw ( KiB/s): min=22424, max=23616, per=99.87%, avg=23174.00, stdev=534.32, samples=4 00:29:48.421 iops : min= 5606, max= 5904, avg=5793.50, stdev=133.58, samples=4 00:29:48.421 lat (msec) : 4=0.05%, 10=34.41%, 20=65.48%, 50=0.06% 00:29:48.421 cpu : usr=53.91%, sys=41.41%, ctx=88, majf=0, minf=38 00:29:48.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:48.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:48.421 issued rwts: total=11679,11649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.421 
latency : target=0, window=0, percentile=100.00%, depth=128 00:29:48.421 00:29:48.421 Run status group 0 (all jobs): 00:29:48.421 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2008-2008msec 00:29:48.421 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.5MiB (47.7MB), run=2008-2008msec 00:29:48.421 06:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:48.679 06:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:48.679 06:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:52.867 06:25:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:52.867 06:25:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:56.157 06:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:56.157 06:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:58.084 06:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:58.084 rmmod nvme_tcp 00:29:58.084 rmmod nvme_fabrics 00:29:58.084 rmmod nvme_keyring 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1844668 ']' 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1844668 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1844668 ']' 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1844668 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:58.084 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1844668 
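The free-space figures echoed earlier in this run come straight from the lvstore JSON: free_mb = free_clusters * cluster_size / 1 MiB, so lvs_0 gives 930 * 1073741824 / 1048576 = 952320 MB and the nested lvs_n_0 gives 237847 * 4194304 / 1048576 = 951388 MB. A minimal sketch of that get_lvs_free_mb calculation, assuming the SPDK rpc.py script is on PATH and reusing the jq filters shown in the trace:

uuid=82da6d6b-9c51-4f76-b656-4739f7350b4b   # lvs_0 in this run
fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
echo $(( fc * cs / 1024 / 1024 ))           # free space in MB, passed to bdev_lvol_create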
00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1844668' 00:29:58.085 killing process with pid 1844668 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1844668 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1844668 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.085 06:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:00.620 00:30:00.620 real 0m37.119s 00:30:00.620 user 2m21.023s 00:30:00.620 sys 0m7.499s 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.620 ************************************ 00:30:00.620 END TEST nvmf_fio_host 00:30:00.620 ************************************ 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.620 ************************************ 00:30:00.620 START TEST nvmf_failover 00:30:00.620 ************************************ 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:00.620 * Looking for test storage... 
00:30:00.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.620 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
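nvmftestinit, expanded in the trace that follows, detects the two e810 ports and builds the TCP test topology: the target-side port is moved into a dedicated network namespace, both sides get 10.0.0.x addresses, port 4420 is opened in iptables, and connectivity is verified with ping. A condensed sketch of the equivalent commands, assuming the cvl_0_0/cvl_0_1 interface names reported later in this run:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check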
00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.621 06:25:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.003 06:25:55 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.003 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:02.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:02.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:02.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:02.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:30:02.004 00:30:02.004 --- 10.0.0.2 ping statistics --- 00:30:02.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.004 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:30:02.004 00:30:02.004 --- 10.0.0.1 ping statistics --- 00:30:02.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.004 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.004 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1850737 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:02.263 06:25:55 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1850737 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1850737 ']' 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.263 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:02.263 [2024-07-23 06:25:55.402101] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:30:02.263 [2024-07-23 06:25:55.402186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.263 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.263 [2024-07-23 06:25:55.439636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:02.263 [2024-07-23 06:25:55.471686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:02.263 [2024-07-23 06:25:55.562044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.263 [2024-07-23 06:25:55.562108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.263 [2024-07-23 06:25:55.562125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.263 [2024-07-23 06:25:55.562138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.263 [2024-07-23 06:25:55.562150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
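With the target application up, failover.sh assembles its fabric over RPC: a TCP transport, a 64 MB Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, and listeners on ports 4420, 4421 and 4422; bdevperf then attaches over 4420 and 4421, and the test removes and re-adds listeners to force path failover. A condensed sketch of that sequence as it appears below, with rpc.py standing in for the full scripts/rpc.py path used in the log:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# bdevperf (RPC socket /var/tmp/bdevperf.sock) attaches two paths, then a listener is pulled:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420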
00:30:02.263 [2024-07-23 06:25:55.562252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.263 [2024-07-23 06:25:55.562351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.263 [2024-07-23 06:25:55.562354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.522 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.522 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:02.522 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.522 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:02.522 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:02.522 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.522 06:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:02.780 [2024-07-23 06:25:55.972100] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.780 06:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:03.038 Malloc0 00:30:03.038 06:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.296 06:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.554 06:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.813 [2024-07-23 06:25:57.110202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.813 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:04.071 [2024-07-23 06:25:57.395007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:04.071 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:04.329 [2024-07-23 06:25:57.635782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:04.329 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1851024 00:30:04.329 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:04.329 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.329 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1851024 /var/tmp/bdevperf.sock 00:30:04.329 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1851024 ']' 00:30:04.329 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:04.329 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.330 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:04.330 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.330 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:04.896 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:04.896 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:04.896 06:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.154 NVMe0n1 00:30:05.154 06:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.413 00:30:05.672 06:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1851162 00:30:05.672 06:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:05.672 06:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:06.610 06:25:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.869 [2024-07-23 06:26:00.004860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.869 [2024-07-23 06:26:00.004985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.869 [2024-07-23 06:26:00.005011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.869 [2024-07-23 06:26:00.005024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.869 [2024-07-23 06:26:00.005037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.869 [2024-07-23 06:26:00.005049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.869 [2024-07-23 06:26:00.005061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa06480 is same with the state(5) to be set 00:30:06.870 [2024-07-23 06:26:00.006156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.870 [2024-07-23 06:26:00.006167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06480 is same with the state(5) to be set 00:30:06.870 06:26:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:10.169 06:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:10.169 00:30:10.169 06:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:10.734 06:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:14.018 06:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.018 [2024-07-23 06:26:07.058958] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.018 06:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:14.954 06:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:15.213 [2024-07-23 06:26:08.321962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.321999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same 
with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 [2024-07-23 06:26:08.322415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa07ff0 is same with the state(5) to be set 00:30:15.213 06:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1851162 00:30:21.784 0 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1851024 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1851024 ']' 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1851024 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851024 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851024' 00:30:21.784 killing process with pid 1851024 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1851024 00:30:21.784 06:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1851024 00:30:21.784 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:21.784 [2024-07-23 06:25:57.698657] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:30:21.784 [2024-07-23 06:25:57.698739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851024 ] 00:30:21.784 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.784 [2024-07-23 06:25:57.730102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:21.784 [2024-07-23 06:25:57.758155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.784 [2024-07-23 06:25:57.844281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.784 Running I/O for 15 seconds... 
00:30:21.784 [2024-07-23 06:26:00.007252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007605] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.007980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.007999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.784 [2024-07-23 06:26:00.008440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.784 [2024-07-23 06:26:00.008459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008480] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.785 [2024-07-23 06:26:00.008584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.008958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.008988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 
06:26:00.009931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.009975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.009989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.010003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.785 [2024-07-23 06:26:00.010017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.010032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.785 [2024-07-23 06:26:00.010055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.010070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.785 [2024-07-23 06:26:00.010084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.010100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.785 [2024-07-23 06:26:00.010120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.010135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.785 [2024-07-23 06:26:00.010158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.010178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.785 [2024-07-23 06:26:00.010193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.785 [2024-07-23 06:26:00.010208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:21.786 [2024-07-23 06:26:00.010887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.010973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.010987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.786 [2024-07-23 06:26:00.011378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.786 [2024-07-23 06:26:00.011393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.787 [2024-07-23 06:26:00.011406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.787 [2024-07-23 06:26:00.011434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.787 [2024-07-23 06:26:00.011462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011480] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.787 [2024-07-23 06:26:00.011495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80544 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80552 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80560 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80568 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80576 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80584 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80592 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80600 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80608 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.011951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.011964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.011977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.011989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80616 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.012002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.012026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.012038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80624 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.012050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.012074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.012085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80632 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.012098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.012123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.012134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80640 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.012147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.012171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.012183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80648 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.012196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.012221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.012232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80656 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.012245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.787 [2024-07-23 06:26:00.012273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.787 [2024-07-23 06:26:00.012284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80664 len:8 PRP1 0x0 PRP2 0x0 00:30:21.787 [2024-07-23 06:26:00.012297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012360] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd6ebe0 was disconnected and freed. reset controller. 
00:30:21.787 [2024-07-23 06:26:00.012378] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:21.787 [2024-07-23 06:26:00.012413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.787 [2024-07-23 06:26:00.012431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.787 [2024-07-23 06:26:00.012469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.787 [2024-07-23 06:26:00.012496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.787 [2024-07-23 06:26:00.012529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:00.012551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.787 [2024-07-23 06:26:00.012611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd41850 (9): Bad file descriptor 00:30:21.787 [2024-07-23 06:26:00.015922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.787 [2024-07-23 06:26:00.208306] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:21.787 [2024-07-23 06:26:03.761906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.787 [2024-07-23 06:26:03.761978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:03.762005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.787 [2024-07-23 06:26:03.762022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.787 [2024-07-23 06:26:03.762039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.787 [2024-07-23 06:26:03.762053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762293] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.788 [2024-07-23 06:26:03.762669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.762974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.762988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.763002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.788 [2024-07-23 06:26:03.763017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.788 [2024-07-23 06:26:03.763030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.789 [2024-07-23 06:26:03.763160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:21.789 [2024-07-23 06:26:03.763810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.763979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.763994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.764011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.764026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.764040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.764055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.764068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.764083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 
06:26:03.764096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.764111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.764125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.764139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.764153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.764168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.789 [2024-07-23 06:26:03.764181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.789 [2024-07-23 06:26:03.764196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764685] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.764979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.764994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.790 [2024-07-23 06:26:03.765348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.790 [2024-07-23 06:26:03.765363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:03.765526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:21.791 [2024-07-23 06:26:03.765569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:03.765733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.791 [2024-07-23 06:26:03.765776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.791 [2024-07-23 06:26:03.765789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126904 len:8 PRP1 0x0 PRP2 0x0 00:30:21.791 [2024-07-23 06:26:03.765801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765860] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd70c10 was disconnected and freed. reset controller. 
00:30:21.791 [2024-07-23 06:26:03.765878] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:21.791 [2024-07-23 06:26:03.765910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:03.765928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:03.765961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.765975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:03.765987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.766001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:03.766014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:03.766026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.791 [2024-07-23 06:26:03.769312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.791 [2024-07-23 06:26:03.769363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd41850 (9): Bad file descriptor 00:30:21.791 [2024-07-23 06:26:03.939367] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:21.791 [2024-07-23 06:26:08.321548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:08.321624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.321645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:08.321659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.321674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:08.321687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.321701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.791 [2024-07-23 06:26:08.321714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.321727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd41850 is same with the state(5) to be set 00:30:21.791 [2024-07-23 06:26:08.324711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:08.324739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:08.324780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.791 [2024-07-23 06:26:08.324811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.324846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.324876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.324904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.324948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.324977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.324991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.325004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.325018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.325031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.325045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.325058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.325073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.325086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.325100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.325114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.325129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.791 [2024-07-23 06:26:08.325142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.791 [2024-07-23 06:26:08.325156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325804] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.325980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.325994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326095] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.792 [2024-07-23 06:26:08.326281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.792 [2024-07-23 06:26:08.326296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.326973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.326986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 
06:26:08.327015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.327043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.327072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.327104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.327134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.327163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:21.793 [2024-07-23 06:26:08.327192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.793 [2024-07-23 06:26:08.327237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100304 len:8 PRP1 0x0 PRP2 0x0 00:30:21.793 [2024-07-23 06:26:08.327250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.793 [2024-07-23 06:26:08.327279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.793 [2024-07-23 06:26:08.327290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100312 len:8 PRP1 0x0 PRP2 0x0 00:30:21.793 [2024-07-23 06:26:08.327302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.793 [2024-07-23 06:26:08.327315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.793 [2024-07-23 06:26:08.327326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:30:21.793 [2024-07-23 06:26:08.327337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100320 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100328 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99616 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99624 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99632 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327633] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99640 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99648 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99656 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100336 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100344 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100352 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:100360 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.327963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100368 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.327975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.327987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.327998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100376 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100384 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100400 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100408 len:8 PRP1 
0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100416 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100424 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100432 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100440 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.794 [2024-07-23 06:26:08.328423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.794 [2024-07-23 06:26:08.328434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.794 [2024-07-23 06:26:08.328445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100448 len:8 PRP1 0x0 PRP2 0x0 00:30:21.794 [2024-07-23 06:26:08.328457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100456 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 
06:26:08.328504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100464 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100472 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100480 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100488 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100504 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100520 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100528 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.328951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.328969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.328981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.328995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100552 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100568 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100576 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100584 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100592 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:21.795 [2024-07-23 06:26:08.329375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:21.795 [2024-07-23 06:26:08.329387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100600 len:8 PRP1 0x0 PRP2 0x0 00:30:21.795 [2024-07-23 06:26:08.329399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:21.795 [2024-07-23 06:26:08.329455] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd7dda0 was disconnected and freed. reset controller. 00:30:21.795 [2024-07-23 06:26:08.329472] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:21.795 [2024-07-23 06:26:08.329487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.795 [2024-07-23 06:26:08.332706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.795 [2024-07-23 06:26:08.332744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd41850 (9): Bad file descriptor 00:30:21.795 [2024-07-23 06:26:08.412289] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:21.795 00:30:21.795 Latency(us) 00:30:21.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.795 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:21.795 Verification LBA range: start 0x0 length 0x4000 00:30:21.795 NVMe0n1 : 15.00 8662.25 33.84 1145.62 0.00 13023.26 831.34 17573.36 00:30:21.795 =================================================================================================================== 00:30:21.795 Total : 8662.25 33.84 1145.62 0.00 13023.26 831.34 17573.36 00:30:21.795 Received shutdown signal, test time was about 15.000000 seconds 00:30:21.795 00:30:21.795 Latency(us) 00:30:21.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.795 =================================================================================================================== 00:30:21.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.795 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1852993 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1852993 /var/tmp/bdevperf.sock 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1852993 ']' 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:21.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
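The trace above is host/failover.sh verifying that the first bdevperf run logged exactly three "Resetting controller successful" messages before relaunching bdevperf in RPC-server mode. A minimal sketch of that check, assuming the current directory is the SPDK checkout and try.txt is the captured bdevperf output as in the trace (paths are shortened here for readability):

  # Count successful controller resets reported in the captured output;
  # the three-path failover scenario expects exactly three.
  count=$(grep -c 'Resetting controller successful' test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful resets, saw $count" >&2
      exit 1
  fi
  # Relaunch bdevperf in RPC-server mode (-z) on /var/tmp/bdevperf.sock so the
  # remaining steps can drive it through rpc.py (flags as in the trace above).
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!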
00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:21.796 [2024-07-23 06:26:14.674324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:21.796 [2024-07-23 06:26:14.927014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:21.796 06:26:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.053 NVMe0n1 00:30:22.053 06:26:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.622 00:30:22.622 06:26:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.880 00:30:22.880 06:26:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:22.880 06:26:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:23.138 06:26:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:23.398 06:26:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:26.691 06:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:26.691 06:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:26.691 06:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1853662 00:30:26.691 06:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:26.691 06:26:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1853662 00:30:27.626 0 00:30:27.626 06:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:27.626 [2024-07-23 06:26:14.203067] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:30:27.626 [2024-07-23 06:26:14.203157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852993 ] 00:30:27.626 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.626 [2024-07-23 06:26:14.236010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:27.626 [2024-07-23 06:26:14.264722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.626 [2024-07-23 06:26:14.348204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.626 [2024-07-23 06:26:16.531249] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:27.626 [2024-07-23 06:26:16.531329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.626 [2024-07-23 06:26:16.531350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.626 [2024-07-23 06:26:16.531381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.626 [2024-07-23 06:26:16.531395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.626 [2024-07-23 06:26:16.531409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.626 [2024-07-23 06:26:16.531423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.626 [2024-07-23 06:26:16.531437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.626 [2024-07-23 06:26:16.531450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.626 [2024-07-23 06:26:16.531464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.626 [2024-07-23 06:26:16.531508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.626 [2024-07-23 06:26:16.531541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca9850 (9): Bad file descriptor 00:30:27.626 [2024-07-23 06:26:16.539814] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:27.626 Running I/O for 1 seconds... 
00:30:27.626
00:30:27.626 Latency(us)
00:30:27.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:27.626 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:27.626 Verification LBA range: start 0x0 length 0x4000
00:30:27.626 NVMe0n1 : 1.01 9010.62 35.20 0.00 0.00 14148.79 3082.62 15146.10
00:30:27.626 ===================================================================================================================
00:30:27.626 Total : 9010.62 35.20 0.00 0.00 14148.79 3082.62 15146.10
00:30:27.626 06:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:27.626 06:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:27.885 06:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:28.452 06:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:28.452 06:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:28.452 06:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:28.710 06:26:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1852993 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1852993 ']' 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1852993 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:31.999 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1852993 00:30:32.259 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:32.259 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:32.259 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1852993' 00:30:32.259 killing process with pid 1852993 00:30:32.259 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1852993 00:30:32.259 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1852993 00:30:32.259 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:32.259 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:32.517 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:32.517 rmmod nvme_tcp 00:30:32.517 rmmod nvme_fabrics 00:30:32.775 rmmod nvme_keyring 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1850737 ']' 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1850737 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1850737 ']' 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1850737 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1850737 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1850737' 00:30:32.775 killing process with pid 1850737 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1850737 00:30:32.775 06:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1850737 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.033 06:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:34.938 00:30:34.938 real 0m34.790s 00:30:34.938 user 2m3.720s 00:30:34.938 sys 0m5.573s 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:34.938 ************************************ 00:30:34.938 END TEST nvmf_failover 00:30:34.938 ************************************ 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.938 ************************************ 00:30:34.938 START TEST nvmf_host_discovery 00:30:34.938 ************************************ 00:30:34.938 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:35.197 * Looking for test storage... 00:30:35.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.197 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:35.198 06:26:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:35.198 06:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:37.105 06:26:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.105 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:37.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:37.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:37.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:37.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:37.106 06:26:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:37.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:30:37.106 00:30:37.106 --- 10.0.0.2 ping statistics --- 00:30:37.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.106 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:30:37.106 00:30:37.106 --- 10.0.0.1 ping statistics --- 00:30:37.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.106 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1856254 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1856254 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1856254 ']' 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
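
The interface plumbing traced above (nvmf_tcp_init in nvmf/common.sh) boils down to moving one port into a private network namespace for the target while the peer port stays in the default namespace as the initiator. A condensed sketch using the interface names and addresses detected in this run; it illustrates the sequence, it is not a copy of the helper:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Every later "ip netns exec cvl_0_0_ns_spdk" prefix in the log exists because the target application has to run where the namespaced port is visible.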
00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.106 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.106 [2024-07-23 06:26:30.362041] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:30:37.106 [2024-07-23 06:26:30.362126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.106 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.106 [2024-07-23 06:26:30.398178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:37.106 [2024-07-23 06:26:30.427410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.365 [2024-07-23 06:26:30.512516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.365 [2024-07-23 06:26:30.512571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.365 [2024-07-23 06:26:30.512590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.365 [2024-07-23 06:26:30.512623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.365 [2024-07-23 06:26:30.512634] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
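
nvmfappstart -m 0x2 launches the target inside that namespace and then blocks in waitforlisten until the RPC socket answers. The loop itself is not shown in the trace, so the following is only a rough equivalent; the flags and socket path come from the log, while the shortened paths and the retry logic are assumptions:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # stand-in for waitforlisten: poll the default RPC socket until the app responds
    for _ in {1..100}; do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

The startup banner above also points at "spdk_trace -s nvmf -i 0" as the way to snapshot the tracepoints enabled by -e 0xFFFF while the target is running.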
00:30:37.365 [2024-07-23 06:26:30.512685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.365 [2024-07-23 06:26:30.647377] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.365 [2024-07-23 06:26:30.655577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.365 null0 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.365 null1 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@45 -- # hostpid=1856298 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1856298 /tmp/host.sock 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1856298 ']' 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:37.365 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:37.365 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.366 06:26:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.626 [2024-07-23 06:26:30.733776] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:30:37.626 [2024-07-23 06:26:30.733858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856298 ] 00:30:37.626 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.626 [2024-07-23 06:26:30.769081] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
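
From this point the discovery test drives two SPDK processes: the namespaced target reached over /var/tmp/spdk.sock, and a second nvmf_tgt acting as the host side, started with -r /tmp/host.sock. Any rpc_cmd in the trace that carries -s /tmp/host.sock goes to the host process; the rest go to the target. A sketch of the split (the rpc_tgt/rpc_host wrapper names are invented here for clarity, the script itself uses rpc_cmd):

    rpc_tgt()  { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # target process
    rpc_host() { ./scripts/rpc.py -s /tmp/host.sock "$@"; }       # host-side process

    # target side: TCP transport plus a discovery listener on port 8009
    rpc_tgt nvmf_create_transport -t tcp -o -u 8192
    rpc_tgt nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    # host side: follow that discovery service with the bdev_nvme discovery poller
    rpc_host bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test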
00:30:37.626 [2024-07-23 06:26:30.800233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.626 [2024-07-23 06:26:30.892383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
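
The repeated @55/@59 fragments above are the test's two polling helpers; as the xtrace suggests, each is a jq/sort/xargs pipeline over the host-side RPC socket. Sketched here for readability; the real functions in test/nvmf/host/discovery.sh go through the rpc_cmd wrapper:

    get_subsystem_names() {   # controller names known to the host-side bdev_nvme layer
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # bdevs created from attached namespaces
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Both still return an empty string at this stage, which is exactly what the [[ '' == '' ]] checks in the trace assert before the subsystem, its namespaces and the allowed host are wired up.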
00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:37.885 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.144 [2024-07-23 06:26:31.305361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:38.144 06:26:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:39.084 [2024-07-23 06:26:32.076481] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:39.084 [2024-07-23 06:26:32.076532] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:39.084 [2024-07-23 06:26:32.076558] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:39.084 
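
waitforcondition, visible above as the @912-@918 fragments of autotest_common.sh, is a bounded poll: evaluate the given condition about once a second for up to ten attempts. Reconstructed approximately from those fragments; the failure path is an assumption:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            sleep 1
        done
        return 1                       # assumed: give up after roughly ten tries
    }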
[2024-07-23 06:26:32.163848] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:39.084 [2024-07-23 06:26:32.389002] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:39.084 [2024-07-23 06:26:32.389039] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
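
Once the discovery poller has attached nvme0 and exposed nvme0n1, the remaining checks rely on two more helpers that can be inferred from the @63 and @74/@75 fragments: one lists the trsvcid of every path of a controller, the other counts notifications raised since the last seen notify id. Approximate shapes; the notify_id bookkeeping in particular is inferred, not copied:

    get_subsystem_paths() {   # e.g. prints "4420" for nvme0 at this point in the test
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {
        notification_count=$(./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }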
00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.343 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:39.344 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.603 [2024-07-23 06:26:32.761621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:39.603 [2024-07-23 06:26:32.762718] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:39.603 [2024-07-23 06:26:32.762755] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # jq -r '.[].name' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:39.603 06:26:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.603 [2024-07-23 06:26:32.890649] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:39.603 06:26:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:39.863 [2024-07-23 06:26:32.989368] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:39.863 [2024-07-23 06:26:32.989394] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:39.863 [2024-07-23 06:26:32.989405] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:40.810 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.811 [2024-07-23 06:26:33.993587] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:40.811 [2024-07-23 06:26:33.993633] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:40.811 [2024-07-23 06:26:34.000180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.811 [2024-07-23 
06:26:34.000213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.811 [2024-07-23 06:26:34.000233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.811 [2024-07-23 06:26:34.000252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:40.811 [2024-07-23 06:26:34.000266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.811 [2024-07-23 06:26:34.000284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.811 [2024-07-23 06:26:34.000298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.811 [2024-07-23 06:26:34.000310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.811 [2024-07-23 06:26:34.000323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.811 06:26:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:40.811 [2024-07-23 06:26:34.010175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.811 [2024-07-23 06:26:34.020217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.811 [2024-07-23 06:26:34.020471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.811 [2024-07-23 06:26:34.020506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.811 [2024-07-23 06:26:34.020524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.811 [2024-07-23 06:26:34.020547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.811 [2024-07-23 06:26:34.020568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.811 [2024-07-23 06:26:34.020583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.811 [2024-07-23 06:26:34.020598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:40.811 [2024-07-23 06:26:34.020625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:40.811 [2024-07-23 06:26:34.030306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.811 [2024-07-23 06:26:34.030537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.811 [2024-07-23 06:26:34.030565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.811 [2024-07-23 06:26:34.030581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.811 [2024-07-23 06:26:34.030603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.811 [2024-07-23 06:26:34.030633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.811 [2024-07-23 06:26:34.030648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.811 [2024-07-23 06:26:34.030661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.811 [2024-07-23 06:26:34.030680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:40.811 [2024-07-23 06:26:34.040388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.811 [2024-07-23 06:26:34.040673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.811 [2024-07-23 06:26:34.040702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.811 [2024-07-23 06:26:34.040719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.811 [2024-07-23 06:26:34.040741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.811 [2024-07-23 06:26:34.040776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.811 [2024-07-23 06:26:34.040794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.811 [2024-07-23 06:26:34.040807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.811 [2024-07-23 06:26:34.040826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:40.811 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:40.811 [2024-07-23 06:26:34.050473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.811 [2024-07-23 06:26:34.050734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.811 [2024-07-23 06:26:34.050763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.811 [2024-07-23 06:26:34.050780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.811 [2024-07-23 06:26:34.050802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.811 [2024-07-23 06:26:34.050822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.811 [2024-07-23 06:26:34.050836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.811 [2024-07-23 06:26:34.050849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.811 [2024-07-23 06:26:34.050868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:40.811 [2024-07-23 06:26:34.060559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.811 [2024-07-23 06:26:34.060789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.811 [2024-07-23 06:26:34.060818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.811 [2024-07-23 06:26:34.060834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.811 [2024-07-23 06:26:34.060857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.811 [2024-07-23 06:26:34.060877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.811 [2024-07-23 06:26:34.060890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.811 [2024-07-23 06:26:34.060904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.811 [2024-07-23 06:26:34.060922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:40.811 [2024-07-23 06:26:34.070644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.811 [2024-07-23 06:26:34.070859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.811 [2024-07-23 06:26:34.070886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.811 [2024-07-23 06:26:34.070902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.811 [2024-07-23 06:26:34.070924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.811 [2024-07-23 06:26:34.070944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.812 [2024-07-23 06:26:34.070962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.812 [2024-07-23 06:26:34.070976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.812 [2024-07-23 06:26:34.070994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.812 [2024-07-23 06:26:34.080716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.812 [2024-07-23 06:26:34.080960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.812 [2024-07-23 06:26:34.080988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.812 [2024-07-23 06:26:34.081005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.812 [2024-07-23 06:26:34.081026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.812 [2024-07-23 06:26:34.081046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.812 [2024-07-23 06:26:34.081059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.812 [2024-07-23 06:26:34.081073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.812 [2024-07-23 06:26:34.081092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:40.812 [2024-07-23 06:26:34.090807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.812 [2024-07-23 06:26:34.091025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.812 [2024-07-23 06:26:34.091065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.812 [2024-07-23 06:26:34.091081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 
00:30:40.812 [2024-07-23 06:26:34.091103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.812 [2024-07-23 06:26:34.091125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.812 [2024-07-23 06:26:34.091139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.812 [2024-07-23 06:26:34.091157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.812 [2024-07-23 06:26:34.091178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.812 [2024-07-23 06:26:34.100916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.812 [2024-07-23 06:26:34.101143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.812 [2024-07-23 06:26:34.101172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.812 [2024-07-23 06:26:34.101188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.812 [2024-07-23 06:26:34.101210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.812 [2024-07-23 06:26:34.101229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.812 [2024-07-23 06:26:34.101243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.812 [2024-07-23 06:26:34.101256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.812 [2024-07-23 06:26:34.101288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:40.812 [2024-07-23 06:26:34.110986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.812 [2024-07-23 06:26:34.111258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.812 [2024-07-23 06:26:34.111286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.812 [2024-07-23 06:26:34.111302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.812 [2024-07-23 06:26:34.111323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.812 [2024-07-23 06:26:34.111358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.812 [2024-07-23 06:26:34.111375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.812 [2024-07-23 06:26:34.111389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.812 [2024-07-23 06:26:34.111407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:40.812 [2024-07-23 06:26:34.121067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:40.812 [2024-07-23 06:26:34.121293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.812 [2024-07-23 06:26:34.121320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b36e0 with addr=10.0.0.2, port=4420 00:30:40.812 [2024-07-23 06:26:34.121335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b36e0 is same with the state(5) to be set 00:30:40.812 [2024-07-23 06:26:34.121356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b36e0 (9): Bad file descriptor 00:30:40.812 [2024-07-23 06:26:34.121404] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:40.812 [2024-07-23 06:26:34.121429] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:40.812 [2024-07-23 06:26:34.121461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:40.812 [2024-07-23 06:26:34.121485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:40.812 [2024-07-23 06:26:34.121500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:40.812 [2024-07-23 06:26:34.121535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:30:40.812 06:26:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 
-- # (( max-- )) 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.196 06:26:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.136 [2024-07-23 06:26:36.417838] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:43.136 [2024-07-23 06:26:36.417898] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:43.136 [2024-07-23 06:26:36.417921] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:43.394 [2024-07-23 06:26:36.504175] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:43.394 [2024-07-23 06:26:36.611669] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:43.394 [2024-07-23 06:26:36.611714] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.394 request: 00:30:43.394 { 00:30:43.394 "name": "nvme", 00:30:43.394 "trtype": "tcp", 00:30:43.394 "traddr": "10.0.0.2", 00:30:43.394 "adrfam": "ipv4", 00:30:43.394 "trsvcid": "8009", 00:30:43.394 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:43.394 "wait_for_attach": true, 00:30:43.394 "method": "bdev_nvme_start_discovery", 00:30:43.394 "req_id": 1 00:30:43.394 } 00:30:43.394 Got JSON-RPC error response 00:30:43.394 response: 00:30:43.394 { 00:30:43.394 "code": -17, 00:30:43.394 "message": "File exists" 00:30:43.394 } 00:30:43.394 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local 
es=0 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.395 request: 00:30:43.395 { 00:30:43.395 "name": "nvme_second", 00:30:43.395 "trtype": "tcp", 00:30:43.395 "traddr": "10.0.0.2", 00:30:43.395 "adrfam": "ipv4", 00:30:43.395 "trsvcid": "8009", 00:30:43.395 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:43.395 "wait_for_attach": true, 00:30:43.395 "method": "bdev_nvme_start_discovery", 00:30:43.395 "req_id": 1 00:30:43.395 } 00:30:43.395 Got JSON-RPC error response 00:30:43.395 response: 00:30:43.395 { 00:30:43.395 "code": -17, 00:30:43.395 "message": "File exists" 00:30:43.395 } 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:43.395 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.655 06:26:36 
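Both "File exists" responses above come from the same guard: a discovery service against 10.0.0.2:8009 is already running on the host app, so a second bdev_nvme_start_discovery on that discovery endpoint is rejected with -17 whether the original name ("nvme") or a new one ("nvme_second") is passed, and the test then re-reads the discovery info and bdev list to confirm nothing changed. A rough equivalent outside the harness, sketched with scripts/rpc.py in place of the test's rpc_cmd wrapper (socket path and host NQN as in this run):

    rpc=./scripts/rpc.py
    # a discovery service "nvme" for 10.0.0.2:8009 is already running on /tmp/host.sock;
    # a second start against the same discovery endpoint is expected to fail with -17 (File exists)
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "duplicate discovery rejected"
    # state should be unchanged: one discovery service, same bdev list
    $rpc -s /tmp/host.sock bdev_nvme_get_discovery_info
    $rpc -s /tmp/host.sock bdev_get_bdevs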
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.655 06:26:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.595 [2024-07-23 06:26:37.827352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.595 [2024-07-23 06:26:37.827418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f1b80 with addr=10.0.0.2, port=8010 00:30:44.595 [2024-07-23 06:26:37.827458] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:44.595 [2024-07-23 06:26:37.827474] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:44.595 [2024-07-23 06:26:37.827488] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:45.534 [2024-07-23 06:26:38.829852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.534 [2024-07-23 06:26:38.829920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f1b80 with addr=10.0.0.2, port=8010 00:30:45.534 [2024-07-23 06:26:38.829949] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:45.534 [2024-07-23 06:26:38.829964] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:45.534 [2024-07-23 06:26:38.829977] bdev_nvme.c:7045:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:30:46.916 [2024-07-23 06:26:39.831964] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:46.916 request: 00:30:46.916 { 00:30:46.916 "name": "nvme_second", 00:30:46.916 "trtype": "tcp", 00:30:46.916 "traddr": "10.0.0.2", 00:30:46.916 "adrfam": "ipv4", 00:30:46.916 "trsvcid": "8010", 00:30:46.916 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:46.916 "wait_for_attach": false, 00:30:46.916 "attach_timeout_ms": 3000, 00:30:46.916 "method": "bdev_nvme_start_discovery", 00:30:46.916 "req_id": 1 00:30:46.916 } 00:30:46.916 Got JSON-RPC error response 00:30:46.916 response: 00:30:46.916 { 00:30:46.916 "code": -110, 00:30:46.916 "message": "Connection timed out" 00:30:46.916 } 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.916 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1856298 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:46.917 rmmod nvme_tcp 00:30:46.917 rmmod nvme_fabrics 00:30:46.917 rmmod nvme_keyring 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:46.917 06:26:39 
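The 8010 attempt exercises the attach-timeout path instead of the duplicate check: nothing is listening on that port, so the discovery poller keeps logging connect() failures (errno 111) until the 3000 ms window set by -T expires and the RPC returns -110 (Connection timed out). Sketched under the same assumptions as above:

    # -T 3000 maps to attach_timeout_ms in the request; with no listener on 8010 this times out
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
        || echo "no discovery controller on 8010: timed out after ~3s (-110)"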
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1856254 ']' 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1856254 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1856254 ']' 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1856254 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856254 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856254' 00:30:46.917 killing process with pid 1856254 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1856254 00:30:46.917 06:26:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1856254 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.917 06:26:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:49.455 00:30:49.455 real 0m14.002s 00:30:49.455 user 0m20.930s 00:30:49.455 sys 0m2.807s 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.455 ************************************ 00:30:49.455 END TEST nvmf_host_discovery 00:30:49.455 ************************************ 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:49.455 06:26:42 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.455 ************************************ 00:30:49.455 START TEST nvmf_host_multipath_status 00:30:49.455 ************************************ 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:49.455 * Looking for test storage... 00:30:49.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.455 06:26:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:49.455 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.456 06:26:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:49.456 06:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:51.360 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:51.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:51.361 06:26:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:51.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:51.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:51.361 06:26:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:51.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:51.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:30:51.361 00:30:51.361 --- 10.0.0.2 ping statistics --- 00:30:51.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.361 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:30:51.361 00:30:51.361 --- 10.0.0.1 ping statistics --- 00:30:51.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.361 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:51.361 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1859450 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1859450 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1859450 ']' 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
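The nvmf_tcp_init block above builds the back-to-back TCP topology the multipath test runs on: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24 in the root namespace, TCP/4420 is accepted on the initiator link, and both directions are ping-verified before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace (interface and namespace names as used in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic arriving on cvl_0_1
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1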
00:30:51.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:51.362 [2024-07-23 06:26:44.439079] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:30:51.362 [2024-07-23 06:26:44.439187] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.362 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.362 [2024-07-23 06:26:44.478488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:51.362 [2024-07-23 06:26:44.506754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:51.362 [2024-07-23 06:26:44.593160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.362 [2024-07-23 06:26:44.593205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.362 [2024-07-23 06:26:44.593233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.362 [2024-07-23 06:26:44.593244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.362 [2024-07-23 06:26:44.593253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.362 [2024-07-23 06:26:44.593337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.362 [2024-07-23 06:26:44.593341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:51.362 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:51.651 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.651 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1859450 00:30:51.651 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:51.651 [2024-07-23 06:26:44.946529] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.651 06:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:51.909 Malloc0 00:30:51.909 06:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:52.167 06:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.424 06:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.681 [2024-07-23 06:26:45.978482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.681 06:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:52.938 [2024-07-23 06:26:46.219167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1859720 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1859720 /var/tmp/bdevperf.sock 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1859720 ']' 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:52.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
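Target configuration for this test is one subsystem backed by a single Malloc bdev and exported on two TCP listeners (4420 and 4421) at the same address, so the host can open two paths to the same namespace; the -r flag on nvmf_create_subsystem presumably enables the ANA reporting that the later per-listener state changes rely on. The rpc.py sequence, as it appears in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421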
00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.938 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:53.197 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:53.197 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:53.197 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:53.767 06:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:54.025 Nvme0n1 00:30:54.025 06:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:54.593 Nvme0n1 00:30:54.593 06:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:54.593 06:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:56.508 06:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:56.508 06:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:57.077 06:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:57.077 06:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
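On the host side, bdevperf is started with its own RPC socket and the same controller name is attached once per listener; the second bdev_nvme_attach_controller carries -x multipath, so the 4421 connection is added as an additional path to the existing Nvme0 controller (both calls report the same Nvme0n1 bdev) rather than creating a new controller. Roughly, using the same flags as the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_set_options -r -1
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    # same controller name, second listener, multipath mode: adds a path instead of a new controller
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10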
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.451 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:58.709 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:58.709 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:58.709 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.709 06:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:58.967 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.967 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:58.967 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.967 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:59.226 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.226 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:59.226 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.226 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:59.484 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.484 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:59.484 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.484 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:59.742 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.742 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:59.742 06:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:00.001 06:26:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:00.258 06:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:01.197 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:01.197 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:01.197 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.197 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:01.455 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:01.455 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:01.455 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.455 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:01.713 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.713 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:01.714 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.714 06:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:01.972 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.972 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:01.972 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.972 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:02.231 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.231 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:02.231 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.231 06:26:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:02.489 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.489 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:02.489 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.489 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:02.747 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.747 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:02.747 06:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:03.005 06:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:03.264 06:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:04.200 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:04.200 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:04.200 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.200 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:04.458 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.458 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:04.458 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.458 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:04.717 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:04.717 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:04.717 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.717 06:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:04.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:04.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:05.233 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.233 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:05.233 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.233 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:05.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:05.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:05.750 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.750 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:05.750 06:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:06.008 06:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:06.265 06:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:07.201 06:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:07.201 06:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:07.201 06:27:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.202 06:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:07.459 06:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.459 06:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:07.459 06:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.460 06:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:07.718 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:07.718 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:07.718 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.718 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:07.976 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.976 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:07.976 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.976 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:08.234 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.234 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:08.234 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.234 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:08.493 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.493 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:08.493 06:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.493 06:27:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:08.751 06:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:08.751 06:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:08.751 06:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:09.009 06:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:09.269 06:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.644 06:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:10.902 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:10.902 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:10.902 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.902 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:11.161 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.161 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:11.161 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.161 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:11.419 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.419 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:11.419 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.419 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:11.710 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:11.710 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:11.710 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.710 06:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:11.975 06:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:11.975 06:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:11.975 06:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:12.233 06:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:12.491 06:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:13.430 06:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:13.430 06:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:13.430 06:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.430 06:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:13.688 06:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:13.688 06:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:13.688 06:27:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.688 06:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:13.946 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.946 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:13.946 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.946 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:14.204 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.204 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:14.204 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.204 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:14.462 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.462 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:14.462 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.462 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:14.720 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.720 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:14.720 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.720 06:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:14.978 06:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.978 06:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:15.236 06:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:31:15.236 06:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:15.494 06:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:15.753 06:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:16.687 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:16.687 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:16.687 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.687 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:16.947 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.947 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:17.206 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.206 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.206 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.206 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.206 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.206 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:17.464 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.464 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:17.464 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.464 06:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:17.722 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.722 06:27:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:17.722 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.722 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:17.980 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.980 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:17.980 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.981 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:18.239 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.239 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:18.239 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:18.497 06:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:18.755 06:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.140 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:20.405 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.405 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:20.405 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.405 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:20.662 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.662 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:20.662 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.662 06:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:20.921 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.921 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:20.921 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.921 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:21.179 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.179 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:21.179 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.179 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:21.437 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.437 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:21.437 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:21.695 06:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:21.955 06:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
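The trace above keeps driving three small helpers out of host/multipath_status.sh: set_ANA_state (trace lines @59-@60), port_status (@64) and check_status (@68-@73). Below is a minimal sketch of what those helpers look like, reconstructed only from the rpc.py and jq invocations visible in the trace; the actual script may differ, and rpc_py / bdevperf_rpc_sock are stand-in variable names.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as printed in the trace
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  # set the ANA state of the 4420 and 4421 listeners on cnode1 (trace lines @59-@60)
  set_ANA_state() {
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n $1
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n $2
  }

  # ask bdevperf for its I/O paths and compare one field of the path on a given port (trace line @64)
  port_status() {
      local port=$1 field=$2 expected=$3
      [[ "$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")" == "$expected" ]]
  }

  # the six assertions issued after every ANA-state change (trace lines @68-@73)
  check_status() {
      port_status 4420 current    $1
      port_status 4421 current    $2
      port_status 4420 connected  $3
      port_status 4421 connected  $4
      port_status 4420 accessible $5
      port_status 4421 accessible $6
  }

  # The jq selectors assume bdev_nvme_get_io_paths returns JSON shaped roughly like the following
  # (field names taken from the selectors themselves; the full RPC output carries more fields):
  #   { "poll_groups": [ { "io_paths": [ { "transport": { "trsvcid": "4421", ... },
  #                                        "current": true, "connected": true, "accessible": true } ] } ] }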
00:31:22.898 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:22.898 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:22.898 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.898 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:23.156 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.156 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:23.156 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.156 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:23.414 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.414 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:23.414 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.414 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:23.673 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.673 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:23.673 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.673 06:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:23.931 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.931 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:23.931 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.931 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:24.189 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.189 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:24.189 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.189 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:24.447 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.447 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:24.447 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:24.705 06:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:24.962 06:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:25.901 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:25.901 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:25.901 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.901 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:26.159 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.159 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:26.159 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.159 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:26.418 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:26.418 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:26.418 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.418 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:26.676 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:31:26.676 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:26.676 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.676 06:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:26.934 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.935 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:26.935 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.935 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:27.193 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.193 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:27.193 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.193 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1859720 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1859720 ']' 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1859720 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859720 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859720' 00:31:27.452 killing process with pid 1859720 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1859720 00:31:27.452 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1859720 00:31:27.452 Connection closed with partial response: 00:31:27.452 00:31:27.452 00:31:27.727 
06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1859720 00:31:27.727 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:27.727 [2024-07-23 06:26:46.283303] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:31:27.728 [2024-07-23 06:26:46.283388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859720 ] 00:31:27.728 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.728 [2024-07-23 06:26:46.315646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:27.728 [2024-07-23 06:26:46.343258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.728 [2024-07-23 06:26:46.430230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.728 Running I/O for 90 seconds... 00:31:27.728 [2024-07-23 06:27:02.296766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.296846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.296904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.296931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.296966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
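The entries above (and continuing below) are bdevperf's own log, dumped by the cat of try.txt: once the 90-second run is torn down, each in-flight WRITE is shown as a command/completion pair, and the completion status tracks the listener state toggled earlier in the trace. The "(03/02)" is SPDK's "(SCT/SC)" notation, i.e. Status Code Type 3h (Path Related Status) with Status Code 02h (Asymmetric Access Inaccessible). An illustrative one-liner to count how many I/Os ended that way, using the file path printed by the cat command above:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt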
00:31:27.728 [2024-07-23 06:27:02.297157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.297945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.297981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.298956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.298986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.299027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.299053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.299088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.299119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:27.728 [2024-07-23 06:27:02.299155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.728 [2024-07-23 06:27:02.299180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
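In each pair of NOTICE lines, qid/cid identify the queue and command, sqhd is the submission-queue head reported in the completion, and p/m/dnr are its phase, more and do-not-retry bits. The LBAs advance by 8 blocks per command (93944, 93952, 93960, ...), matching the len:8 sequential pattern bdevperf was issuing. A throw-away, illustrative filter to pull that stride back out of the same log:

  grep -o 'lba:[0-9]* len:8' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | head -5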
00:31:27.729 [2024-07-23 06:27:02.299216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.299980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.299996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 
[2024-07-23 06:27:02.300591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.729 [2024-07-23 06:27:02.300941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:27.729 [2024-07-23 06:27:02.300963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.300978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.301091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.301127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.301164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.301200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.301237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.301281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.301321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:31:27.730 [2024-07-23 06:27:02.301775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.301983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.301999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.302020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.302035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.303226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.303298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.303360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.303423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.303499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.303558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.730 [2024-07-23 06:27:02.303639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.303680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.303718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:27.730 [2024-07-23 06:27:02.303740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.730 [2024-07-23 06:27:02.303755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.303777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.303793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.303815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.303830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.303852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.303868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.303889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.303905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.303941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.303957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.303978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.303993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.731 [2024-07-23 06:27:02.304258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.731 [2024-07-23 06:27:02.304424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 
nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.304952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.304979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.731 [2024-07-23 06:27:02.305475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.731 [2024-07-23 06:27:02.305490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:27.732 
[2024-07-23 06:27:02.305691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.305852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.305868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.306667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.306697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.306738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.306765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.306801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.306827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.306863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.306888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.306923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.306957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.306980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.306995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307402] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.732 [2024-07-23 06:27:02.307561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:27.732 [2024-07-23 06:27:02.307582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 
06:27:02.307822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.307973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.307989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94464 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:27.733 [2024-07-23 06:27:02.308674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.733 [2024-07-23 06:27:02.308691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
[... repeated nvme_qpair.c nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: every remaining queued READ and WRITE command on qid:1 (lba 93744-94760) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-07-23 06:27:02.308 through 06:27:02.320 ...]
00:31:27.739 [2024-07-23 06:27:02.320941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.320957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.320978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.321008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.321029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.321044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.321906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.321938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.321985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.739 [2024-07-23 06:27:02.322731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:27.739 [2024-07-23 06:27:02.322753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.322770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.322791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.322807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.322829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.322845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.322868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.322883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.322921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.322938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.322974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.322991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.740 [2024-07-23 06:27:02.323141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.323959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.323989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.324025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.324061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.740 [2024-07-23 06:27:02.324097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.740 [2024-07-23 06:27:02.324133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.740 [2024-07-23 06:27:02.324172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.740 [2024-07-23 06:27:02.324208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.740 [2024-07-23 06:27:02.324242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.740 [2024-07-23 06:27:02.324277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.740 [2024-07-23 06:27:02.324312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:27.740 [2024-07-23 06:27:02.324333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.740 [2024-07-23 06:27:02.324348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:31:27.740 [2024-07-23 06:27:02.324368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.324914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.324930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.326622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.741 [2024-07-23 06:27:02.326679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.741 [2024-07-23 06:27:02.326718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.741 [2024-07-23 06:27:02.326755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.741 [2024-07-23 06:27:02.326792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.741 [2024-07-23 06:27:02.326829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.741 [2024-07-23 06:27:02.326866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.741 [2024-07-23 06:27:02.326917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.741 [2024-07-23 06:27:02.326975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.326996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.741 [2024-07-23 06:27:02.327011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.741 [2024-07-23 06:27:02.327031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.742 [2024-07-23 06:27:02.327360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.327975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.327991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:31:27.742 [2024-07-23 06:27:02.328251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:27.742 [2024-07-23 06:27:02.328688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.742 [2024-07-23 06:27:02.328704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.329983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.329998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.743 [2024-07-23 06:27:02.330396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.330929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.330955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.331001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.331025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.331058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.331082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.331111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.331131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:27.743 [2024-07-23 06:27:02.331152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.743 [2024-07-23 06:27:02.331167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:31:27.744 [2024-07-23 06:27:02.331646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.331717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.744 [2024-07-23 06:27:02.331756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.744 [2024-07-23 06:27:02.331795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.744 [2024-07-23 06:27:02.331833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.744 [2024-07-23 06:27:02.331871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.744 [2024-07-23 06:27:02.331925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.744 [2024-07-23 06:27:02.331962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.331997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.744 [2024-07-23 06:27:02.332012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.332494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.744 [2024-07-23 06:27:02.332509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:27.744 [2024-07-23 06:27:02.333772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.333805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.333848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.333875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.333927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.333953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.333988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.745 [2024-07-23 06:27:02.334191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.334714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.334976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.334998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.335013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.335048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.335098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.745 [2024-07-23 06:27:02.335159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.335219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.335278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.335339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.335398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.745 [2024-07-23 06:27:02.335493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:27.745 [2024-07-23 06:27:02.335529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 
dnr:0 00:31:27.746 [2024-07-23 06:27:02.335575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.335976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.335997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.336350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.336365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.337221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.337290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.337353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.337415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.337497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.337544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.746 [2024-07-23 06:27:02.337580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:27.746 [2024-07-23 06:27:02.337601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.337638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.337663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.747 [2024-07-23 06:27:02.337680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.337701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.337716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.337736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.337751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.337772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.337794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.337828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.337853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.337888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.337914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.337961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.337985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:31:27.747 [2024-07-23 06:27:02.338956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.338971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.338992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.339007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.339027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.339041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.339068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.339084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.339105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.339119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.339140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.747 [2024-07-23 06:27:02.339154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.747 [2024-07-23 06:27:02.339174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.339400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.339435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.339470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.339508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.339544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.339579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.339639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.339959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.339974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.340010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.340029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.340050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.340066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.340086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.748 [2024-07-23 06:27:02.340101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.340121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.340137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.340159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.340174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.748 [2024-07-23 06:27:02.341830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:27.748 [2024-07-23 06:27:02.341851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.748 [2024-07-23 06:27:02.341866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.341887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.341902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.341938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.341954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.341975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.341990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.342182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 
dnr:0 00:31:27.749 [2024-07-23 06:27:02.342442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.749 [2024-07-23 06:27:02.342567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.342603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.342670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.342736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.342801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.342864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.342941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.342989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.749 [2024-07-23 06:27:02.343697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.749 [2024-07-23 06:27:02.343714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.343742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.343758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.343781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.343799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.343822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.343838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.343860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.343876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.343899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.750 [2024-07-23 06:27:02.343914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.344749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.344781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.344824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.344850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.344886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.344912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.344948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.344974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.345971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.345987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.346009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.346025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.346047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.346062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.346084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.346108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:31:27.750 [2024-07-23 06:27:02.346144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.346183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.346233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.346260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.346296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.346319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:27.750 [2024-07-23 06:27:02.346344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.750 [2024-07-23 06:27:02.346360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.346977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.346992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.751 [2024-07-23 06:27:02.347120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.751 [2024-07-23 06:27:02.347155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.751 [2024-07-23 06:27:02.347191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.751 [2024-07-23 06:27:02.347227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.751 [2024-07-23 06:27:02.347263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.751 [2024-07-23 06:27:02.347315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.751 [2024-07-23 06:27:02.347369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.751 [2024-07-23 06:27:02.347409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.751 [2024-07-23 06:27:02.347653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:27.751 [2024-07-23 06:27:02.347675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.347691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.347713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.347729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.347751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.347766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.347788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.347804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.347826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.347842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.348981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:31:27.752 [2024-07-23 06:27:02.349832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.349929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.349967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.349989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.752 [2024-07-23 06:27:02.350417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.752 [2024-07-23 06:27:02.350478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:27.752 [2024-07-23 06:27:02.350513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.350981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.350997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.753 [2024-07-23 06:27:02.362409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.362750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.362766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.753 [2024-07-23 06:27:02.363777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:27.753 [2024-07-23 06:27:02.363803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.363819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.363845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.363861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.363887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.363903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.363929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.363945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.363971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.363988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:31:27.754 [2024-07-23 06:27:02.364019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.364965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.364981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:27.754 [2024-07-23 06:27:02.365327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.754 [2024-07-23 06:27:02.365412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.754 [2024-07-23 06:27:02.365454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:27.754 [2024-07-23 06:27:02.365480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.754 [2024-07-23 06:27:02.365496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:02.365538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:02.365580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:02.365636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:02.365681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:02.365724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.365766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.365808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.365850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.365892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.365934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.365960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.365976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.366018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.366034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.366074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.366091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.366118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.366134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.366160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.366179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.366207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.366224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:02.366407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:02.366429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.067868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.067935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.070509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.070571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.070647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.070708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.070790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.070831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.070870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 
dnr:0 00:31:27.755 [2024-07-23 06:27:18.070906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.070946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.070980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.071000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.071032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.071063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.071085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.755 [2024-07-23 06:27:18.071099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.072079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.072112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.072151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.072198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.072237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.072264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:27.755 [2024-07-23 06:27:18.072317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.755 [2024-07-23 06:27:18.072345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:27.756 [2024-07-23 06:27:18.072383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:27.756 [2024-07-23 06:27:18.072409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:27.756 Received shutdown signal, test time was about 32.643939 seconds 00:31:27.756 00:31:27.756 Latency(us) 00:31:27.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.756 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:27.756 
Verification LBA range: start 0x0 length 0x4000 00:31:27.756 Nvme0n1 : 32.64 7971.93 31.14 0.00 0.00 16029.50 776.72 4101097.24 00:31:27.756 =================================================================================================================== 00:31:27.756 Total : 7971.93 31.14 0.00 0.00 16029.50 776.72 4101097.24 00:31:27.756 06:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:28.016 rmmod nvme_tcp 00:31:28.016 rmmod nvme_fabrics 00:31:28.016 rmmod nvme_keyring 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1859450 ']' 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1859450 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1859450 ']' 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1859450 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859450 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859450' 00:31:28.016 killing process with pid 1859450 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1859450 00:31:28.016 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@972 -- # wait 1859450 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.277 06:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.183 06:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:30.183 00:31:30.183 real 0m41.239s 00:31:30.183 user 2m2.412s 00:31:30.183 sys 0m11.722s 00:31:30.183 06:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:30.183 06:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:30.183 ************************************ 00:31:30.183 END TEST nvmf_host_multipath_status 00:31:30.183 ************************************ 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.441 ************************************ 00:31:30.441 START TEST nvmf_discovery_remove_ifc 00:31:30.441 ************************************ 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:30.441 * Looking for test storage... 
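The teardown traced above follows the usual nvmftestfini order: delete the test subsystem over the RPC socket, unload the kernel NVMe/TCP initiator modules, then kill the nvmf target process and wait for it. A minimal manual sketch of that sequence, with the PID, subsystem NQN and workspace path taken from this log and everything else assumed rather than copied from the scripts:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem exercised by multipath_status
    modprobe -v -r nvme-tcp nvme-fabrics                              # unload initiator modules; the trace also shows nvme_keyring being removed
    kill 1859450 && wait 1859450                                      # assumes the target was started from this same shell, as the harness does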
00:31:30.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.441 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
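The host identity used for the connect calls in this test is generated at setup time, not configured: common.sh asks nvme-cli for a fresh host NQN and reuses its UUID as the host ID. A minimal sketch of that derivation, with the variable names and values taken from the trace above and the suffix-stripping expansion an assumption about the exact shell code:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: host ID is the UUID suffix of the generated NQN, matching the values logged here
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'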
00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:30.442 06:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.347 06:27:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.347 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:32.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:32.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:32.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.348 
06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:32.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.348 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:32.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:31:32.607 00:31:32.607 --- 10.0.0.2 ping statistics --- 00:31:32.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.607 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:31:32.607 00:31:32.607 --- 10.0.0.1 ping statistics --- 00:31:32.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.607 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1866528 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1866528 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1866528 ']' 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
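The trace above is the nvmftestinit phase: the two detected E810 ports (cvl_0_0, cvl_0_1) are wired into a point-to-point NVMe/TCP test topology by moving the target-side port into a private network namespace, addressing both ends from 10.0.0.0/24, opening TCP port 4420, and ping-testing the link in both directions. A condensed shell sketch of those steps, using the interface, namespace, and address names from this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator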
00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:32.607 06:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.607 [2024-07-23 06:27:25.857636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:31:32.607 [2024-07-23 06:27:25.857717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.607 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.607 [2024-07-23 06:27:25.893278] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:32.607 [2024-07-23 06:27:25.924876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.867 [2024-07-23 06:27:26.017016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.867 [2024-07-23 06:27:26.017074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.867 [2024-07-23 06:27:26.017088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.867 [2024-07-23 06:27:26.017099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.867 [2024-07-23 06:27:26.017123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.867 [2024-07-23 06:27:26.017148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.867 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.867 [2024-07-23 06:27:26.164509] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.867 [2024-07-23 06:27:26.172721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:32.867 null0 00:31:32.867 [2024-07-23 06:27:26.204653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1866675 00:31:33.126 06:27:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1866675 /tmp/host.sock 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1866675 ']' 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:33.126 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.126 [2024-07-23 06:27:26.269541] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:31:33.126 [2024-07-23 06:27:26.269637] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1866675 ] 00:31:33.126 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.126 [2024-07-23 06:27:26.301871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
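The nvmf_tgt just launched with -r /tmp/host.sock --wait-for-rpc is the host-side application for this test; the trace that follows drives it entirely over that socket: setting bdev_nvme options, finishing framework init, starting discovery against the target's 8009 discovery service, and then repeatedly listing bdevs while the target-side interface is torn down and restored. A rough sketch of that RPC sequence as it appears in the trace (rpc_cmd is the test harness's wrapper around the SPDK RPC client):

  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
          -f ipv4 -q nqn.2021-12.io.spdk:test \
          --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
          --wait-for-attach
  # get_bdev_list: poll the attached namespaces until the expected bdev appears or disappears
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs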
00:31:33.126 [2024-07-23 06:27:26.331924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.126 [2024-07-23 06:27:26.422347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.126 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.384 06:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.321 [2024-07-23 06:27:27.623997] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:34.321 [2024-07-23 06:27:27.624028] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:34.321 [2024-07-23 06:27:27.624053] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:34.580 [2024-07-23 06:27:27.710315] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:34.580 [2024-07-23 06:27:27.773916] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:34.580 [2024-07-23 06:27:27.773989] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:34.580 [2024-07-23 06:27:27.774032] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:34.580 [2024-07-23 06:27:27.774063] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:34.580 [2024-07-23 06:27:27.774090] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:34.580 06:27:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.580 [2024-07-23 06:27:27.780994] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10f4370 was disconnected and freed. delete nvme_qpair. 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:34.580 06:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:35.956 06:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.921 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.921 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.921 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.921 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.922 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.922 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.922 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.922 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.922 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:36.922 06:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:37.860 06:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:37.860 06:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.860 06:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:37.860 06:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.860 06:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.860 06:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:37.860 06:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:37.860 06:27:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.860 06:27:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:37.860 06:27:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:38.797 06:27:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:38.797 06:27:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:40.178 06:27:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:40.178 [2024-07-23 06:27:33.215275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:40.178 [2024-07-23 06:27:33.215347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.178 [2024-07-23 06:27:33.215369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.178 [2024-07-23 06:27:33.215389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.178 [2024-07-23 06:27:33.215404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.178 [2024-07-23 06:27:33.215420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.178 [2024-07-23 06:27:33.215436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.178 [2024-07-23 06:27:33.215451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:40.178 [2024-07-23 06:27:33.215466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.178 [2024-07-23 06:27:33.215482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.178 [2024-07-23 06:27:33.215497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.178 [2024-07-23 06:27:33.215512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bad70 is same with the state(5) to be set 00:31:40.178 [2024-07-23 06:27:33.225292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bad70 (9): Bad file descriptor 00:31:40.178 [2024-07-23 06:27:33.235340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:41.117 [2024-07-23 06:27:34.280650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:41.117 [2024-07-23 06:27:34.280728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bad70 with addr=10.0.0.2, port=4420 00:31:41.117 [2024-07-23 06:27:34.280753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bad70 is same with the state(5) to be set 00:31:41.117 [2024-07-23 06:27:34.280796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bad70 (9): Bad file descriptor 00:31:41.117 [2024-07-23 06:27:34.281201] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:41.117 [2024-07-23 06:27:34.281231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:41.117 [2024-07-23 06:27:34.281246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:41.117 [2024-07-23 06:27:34.281261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:41.117 [2024-07-23 06:27:34.281288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:41.117 [2024-07-23 06:27:34.281305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:41.117 06:27:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:42.052 [2024-07-23 06:27:35.283805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:42.052 [2024-07-23 06:27:35.283844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:42.052 [2024-07-23 06:27:35.283875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:42.052 [2024-07-23 06:27:35.283890] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:42.052 [2024-07-23 06:27:35.283933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.052 [2024-07-23 06:27:35.283969] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:42.052 [2024-07-23 06:27:35.284034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.052 [2024-07-23 06:27:35.284056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.052 [2024-07-23 06:27:35.284076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.053 [2024-07-23 06:27:35.284091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.053 [2024-07-23 06:27:35.284107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.053 [2024-07-23 06:27:35.284121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.053 [2024-07-23 06:27:35.284137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.053 [2024-07-23 06:27:35.284151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.053 [2024-07-23 06:27:35.284167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.053 [2024-07-23 06:27:35.284182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.053 [2024-07-23 06:27:35.284197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:42.053 [2024-07-23 06:27:35.284391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba210 (9): Bad file descriptor 00:31:42.053 [2024-07-23 06:27:35.285407] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:42.053 [2024-07-23 06:27:35.285433] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:42.053 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.313 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:42.313 06:27:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.252 06:27:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:43.252 06:27:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.186 [2024-07-23 06:27:37.338814] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:44.186 [2024-07-23 06:27:37.338837] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:44.186 [2024-07-23 06:27:37.338858] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:44.186 [2024-07-23 06:27:37.425157] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:44.186 06:27:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.446 [2024-07-23 06:27:37.650751] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:44.446 [2024-07-23 06:27:37.650795] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:44.446 [2024-07-23 06:27:37.650829] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:44.446 [2024-07-23 06:27:37.650849] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:44.446 [2024-07-23 06:27:37.650862] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:44.446 [2024-07-23 06:27:37.657023] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10fd760 was disconnected and freed. 
delete nvme_qpair. 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1866675 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1866675 ']' 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1866675 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1866675 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1866675' 00:31:45.386 killing process with pid 1866675 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1866675 00:31:45.386 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1866675 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:45.646 rmmod nvme_tcp 00:31:45.646 rmmod nvme_fabrics 00:31:45.646 rmmod nvme_keyring 
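The teardown now under way (and continuing below) is the usual harness cleanup: kill the host app and the namespaced nvmf_tgt, unload the NVMe/TCP kernel modules, and drop the SPDK test namespace and leftover addresses. Condensed from this run's trace (the pids and interface names are specific to this run):

  kill 1866675 && wait 1866675        # host app listening on /tmp/host.sock
  modprobe -v -r nvme-tcp             # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill 1866528 && wait 1866528        # nvmf_tgt running inside cvl_0_0_ns_spdk
  _remove_spdk_ns                     # harness helper that tears down the SPDK netns
  ip -4 addr flush cvl_0_1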
00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1866528 ']' 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1866528 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1866528 ']' 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1866528 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1866528 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1866528' 00:31:45.646 killing process with pid 1866528 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1866528 00:31:45.646 06:27:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1866528 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.904 06:27:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.820 06:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:48.085 00:31:48.085 real 0m17.593s 00:31:48.085 user 0m25.411s 00:31:48.085 sys 0m3.051s 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.085 ************************************ 00:31:48.085 END TEST nvmf_discovery_remove_ifc 00:31:48.085 ************************************ 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:48.085 06:27:41 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.085 ************************************ 00:31:48.085 START TEST nvmf_identify_kernel_target 00:31:48.085 ************************************ 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:48.085 * Looking for test storage... 00:31:48.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.085 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:48.086 06:27:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:48.086 06:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:49.987 06:27:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:49.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:49.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:49.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:49.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:49.987 
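The trace above shows gather_supported_nvmf_pci_devs matching the two Intel E810 ports (0x8086:0x159b, ice driver) and then resolving each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, using the PCI addresses from this log (everything else here is illustrative, not the common.sh code itself):

  # Resolve a PCI NIC to its net device the same way the test does,
  # by globbing /sys/bus/pci/devices/<bdf>/net/.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue
          echo "Found net device under $pci: ${dev##*/}"
      done
  done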
06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:49.987 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:49.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:49.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:31:49.988 00:31:49.988 --- 10.0.0.2 ping statistics --- 00:31:49.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.988 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:49.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:31:49.988 00:31:49.988 --- 10.0.0.1 ping statistics --- 00:31:49.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.988 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:49.988 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
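The nvmf_tcp_init sequence above builds the test topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction verifies the link before any NVMe traffic. A condensed sketch of those steps (interface and namespace names taken from the log; this is not the exact common.sh code):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> host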
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:50.246 06:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.184 Waiting for block devices as requested 00:31:51.184 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:51.184 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:51.443 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:51.443 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:51.443 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:51.702 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:51.702 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:51.702 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:51.702 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:51.960 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:51.960 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:51.960 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.219 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.219 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:52.219 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:52.219 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:52.505 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:52.505 No valid GPT data, bailing 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:52.506 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:52.767 06:27:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:52.767 00:31:52.767 Discovery Log Number of Records 2, Generation counter 2 00:31:52.767 =====Discovery Log Entry 0====== 00:31:52.767 trtype: tcp 00:31:52.767 adrfam: ipv4 00:31:52.767 subtype: current discovery subsystem 00:31:52.767 treq: not specified, sq flow control disable supported 00:31:52.767 portid: 1 00:31:52.767 trsvcid: 4420 00:31:52.767 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:52.767 traddr: 10.0.0.1 00:31:52.767 eflags: none 00:31:52.767 sectype: none 00:31:52.767 =====Discovery Log Entry 1====== 00:31:52.767 trtype: tcp 00:31:52.767 adrfam: ipv4 00:31:52.767 subtype: nvme subsystem 00:31:52.767 treq: not specified, sq flow control disable supported 00:31:52.767 portid: 1 00:31:52.767 trsvcid: 4420 00:31:52.767 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:52.767 traddr: 10.0.0.1 00:31:52.767 eflags: none 00:31:52.767 sectype: none 00:31:52.767 06:27:45 
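configure_kernel_target then stands up a Linux-kernel NVMe/TCP target through configfs, exporting /dev/nvme0n1 as namespace 1 of nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420, and the nvme discover output above confirms that both the discovery subsystem and the test subsystem are advertised. A sketch of those configfs writes, assuming the standard nvmet attribute file names (the xtrace prints only the echoed values, not the redirect targets):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1
  modprobe nvmet-tcp          # the script loads nvmet; nvmet-tcp is its TCP transport
  mkdir -p "$subsys/namespaces/1" "$port"
  [ -f "$subsys/attr_model" ] && echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  # The listener can then be probed much as in the log:
  nvme discover -t tcp -a 10.0.0.1 -s 4420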
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:52.767 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:52.767 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.767 ===================================================== 00:31:52.767 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:52.767 ===================================================== 00:31:52.767 Controller Capabilities/Features 00:31:52.767 ================================ 00:31:52.767 Vendor ID: 0000 00:31:52.767 Subsystem Vendor ID: 0000 00:31:52.767 Serial Number: 96083234512f6d7642ff 00:31:52.767 Model Number: Linux 00:31:52.767 Firmware Version: 6.7.0-68 00:31:52.767 Recommended Arb Burst: 0 00:31:52.767 IEEE OUI Identifier: 00 00 00 00:31:52.767 Multi-path I/O 00:31:52.767 May have multiple subsystem ports: No 00:31:52.767 May have multiple controllers: No 00:31:52.767 Associated with SR-IOV VF: No 00:31:52.767 Max Data Transfer Size: Unlimited 00:31:52.767 Max Number of Namespaces: 0 00:31:52.767 Max Number of I/O Queues: 1024 00:31:52.767 NVMe Specification Version (VS): 1.3 00:31:52.767 NVMe Specification Version (Identify): 1.3 00:31:52.767 Maximum Queue Entries: 1024 00:31:52.767 Contiguous Queues Required: No 00:31:52.767 Arbitration Mechanisms Supported 00:31:52.767 Weighted Round Robin: Not Supported 00:31:52.767 Vendor Specific: Not Supported 00:31:52.767 Reset Timeout: 7500 ms 00:31:52.767 Doorbell Stride: 4 bytes 00:31:52.767 NVM Subsystem Reset: Not Supported 00:31:52.767 Command Sets Supported 00:31:52.767 NVM Command Set: Supported 00:31:52.767 Boot Partition: Not Supported 00:31:52.767 Memory Page Size Minimum: 4096 bytes 00:31:52.767 Memory Page Size Maximum: 4096 bytes 00:31:52.767 Persistent Memory Region: Not Supported 00:31:52.767 Optional Asynchronous Events Supported 00:31:52.767 Namespace Attribute Notices: Not Supported 00:31:52.767 Firmware Activation Notices: Not Supported 00:31:52.767 ANA Change Notices: Not Supported 00:31:52.767 PLE Aggregate Log Change Notices: Not Supported 00:31:52.767 LBA Status Info Alert Notices: Not Supported 00:31:52.767 EGE Aggregate Log Change Notices: Not Supported 00:31:52.767 Normal NVM Subsystem Shutdown event: Not Supported 00:31:52.767 Zone Descriptor Change Notices: Not Supported 00:31:52.767 Discovery Log Change Notices: Supported 00:31:52.767 Controller Attributes 00:31:52.767 128-bit Host Identifier: Not Supported 00:31:52.767 Non-Operational Permissive Mode: Not Supported 00:31:52.767 NVM Sets: Not Supported 00:31:52.767 Read Recovery Levels: Not Supported 00:31:52.767 Endurance Groups: Not Supported 00:31:52.767 Predictable Latency Mode: Not Supported 00:31:52.767 Traffic Based Keep ALive: Not Supported 00:31:52.767 Namespace Granularity: Not Supported 00:31:52.767 SQ Associations: Not Supported 00:31:52.767 UUID List: Not Supported 00:31:52.767 Multi-Domain Subsystem: Not Supported 00:31:52.767 Fixed Capacity Management: Not Supported 00:31:52.767 Variable Capacity Management: Not Supported 00:31:52.767 Delete Endurance Group: Not Supported 00:31:52.767 Delete NVM Set: Not Supported 00:31:52.767 Extended LBA Formats Supported: Not Supported 00:31:52.767 Flexible Data Placement Supported: Not Supported 00:31:52.767 00:31:52.767 Controller Memory Buffer Support 00:31:52.767 ================================ 00:31:52.767 Supported: No 
00:31:52.767 00:31:52.767 Persistent Memory Region Support 00:31:52.767 ================================ 00:31:52.767 Supported: No 00:31:52.767 00:31:52.767 Admin Command Set Attributes 00:31:52.767 ============================ 00:31:52.767 Security Send/Receive: Not Supported 00:31:52.767 Format NVM: Not Supported 00:31:52.767 Firmware Activate/Download: Not Supported 00:31:52.767 Namespace Management: Not Supported 00:31:52.767 Device Self-Test: Not Supported 00:31:52.767 Directives: Not Supported 00:31:52.767 NVMe-MI: Not Supported 00:31:52.767 Virtualization Management: Not Supported 00:31:52.767 Doorbell Buffer Config: Not Supported 00:31:52.767 Get LBA Status Capability: Not Supported 00:31:52.767 Command & Feature Lockdown Capability: Not Supported 00:31:52.767 Abort Command Limit: 1 00:31:52.767 Async Event Request Limit: 1 00:31:52.767 Number of Firmware Slots: N/A 00:31:52.767 Firmware Slot 1 Read-Only: N/A 00:31:52.767 Firmware Activation Without Reset: N/A 00:31:52.767 Multiple Update Detection Support: N/A 00:31:52.767 Firmware Update Granularity: No Information Provided 00:31:52.767 Per-Namespace SMART Log: No 00:31:52.767 Asymmetric Namespace Access Log Page: Not Supported 00:31:52.767 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:52.767 Command Effects Log Page: Not Supported 00:31:52.767 Get Log Page Extended Data: Supported 00:31:52.767 Telemetry Log Pages: Not Supported 00:31:52.767 Persistent Event Log Pages: Not Supported 00:31:52.767 Supported Log Pages Log Page: May Support 00:31:52.767 Commands Supported & Effects Log Page: Not Supported 00:31:52.767 Feature Identifiers & Effects Log Page:May Support 00:31:52.767 NVMe-MI Commands & Effects Log Page: May Support 00:31:52.767 Data Area 4 for Telemetry Log: Not Supported 00:31:52.767 Error Log Page Entries Supported: 1 00:31:52.767 Keep Alive: Not Supported 00:31:52.767 00:31:52.767 NVM Command Set Attributes 00:31:52.767 ========================== 00:31:52.767 Submission Queue Entry Size 00:31:52.767 Max: 1 00:31:52.767 Min: 1 00:31:52.767 Completion Queue Entry Size 00:31:52.767 Max: 1 00:31:52.767 Min: 1 00:31:52.767 Number of Namespaces: 0 00:31:52.767 Compare Command: Not Supported 00:31:52.767 Write Uncorrectable Command: Not Supported 00:31:52.767 Dataset Management Command: Not Supported 00:31:52.767 Write Zeroes Command: Not Supported 00:31:52.767 Set Features Save Field: Not Supported 00:31:52.767 Reservations: Not Supported 00:31:52.767 Timestamp: Not Supported 00:31:52.767 Copy: Not Supported 00:31:52.767 Volatile Write Cache: Not Present 00:31:52.767 Atomic Write Unit (Normal): 1 00:31:52.767 Atomic Write Unit (PFail): 1 00:31:52.767 Atomic Compare & Write Unit: 1 00:31:52.767 Fused Compare & Write: Not Supported 00:31:52.767 Scatter-Gather List 00:31:52.767 SGL Command Set: Supported 00:31:52.767 SGL Keyed: Not Supported 00:31:52.767 SGL Bit Bucket Descriptor: Not Supported 00:31:52.768 SGL Metadata Pointer: Not Supported 00:31:52.768 Oversized SGL: Not Supported 00:31:52.768 SGL Metadata Address: Not Supported 00:31:52.768 SGL Offset: Supported 00:31:52.768 Transport SGL Data Block: Not Supported 00:31:52.768 Replay Protected Memory Block: Not Supported 00:31:52.768 00:31:52.768 Firmware Slot Information 00:31:52.768 ========================= 00:31:52.768 Active slot: 0 00:31:52.768 00:31:52.768 00:31:52.768 Error Log 00:31:52.768 ========= 00:31:52.768 00:31:52.768 Active Namespaces 00:31:52.768 ================= 00:31:52.768 Discovery Log Page 00:31:52.768 ================== 00:31:52.768 
Generation Counter: 2 00:31:52.768 Number of Records: 2 00:31:52.768 Record Format: 0 00:31:52.768 00:31:52.768 Discovery Log Entry 0 00:31:52.768 ---------------------- 00:31:52.768 Transport Type: 3 (TCP) 00:31:52.768 Address Family: 1 (IPv4) 00:31:52.768 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:52.768 Entry Flags: 00:31:52.768 Duplicate Returned Information: 0 00:31:52.768 Explicit Persistent Connection Support for Discovery: 0 00:31:52.768 Transport Requirements: 00:31:52.768 Secure Channel: Not Specified 00:31:52.768 Port ID: 1 (0x0001) 00:31:52.768 Controller ID: 65535 (0xffff) 00:31:52.768 Admin Max SQ Size: 32 00:31:52.768 Transport Service Identifier: 4420 00:31:52.768 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:52.768 Transport Address: 10.0.0.1 00:31:52.768 Discovery Log Entry 1 00:31:52.768 ---------------------- 00:31:52.768 Transport Type: 3 (TCP) 00:31:52.768 Address Family: 1 (IPv4) 00:31:52.768 Subsystem Type: 2 (NVM Subsystem) 00:31:52.768 Entry Flags: 00:31:52.768 Duplicate Returned Information: 0 00:31:52.768 Explicit Persistent Connection Support for Discovery: 0 00:31:52.768 Transport Requirements: 00:31:52.768 Secure Channel: Not Specified 00:31:52.768 Port ID: 1 (0x0001) 00:31:52.768 Controller ID: 65535 (0xffff) 00:31:52.768 Admin Max SQ Size: 32 00:31:52.768 Transport Service Identifier: 4420 00:31:52.768 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:52.768 Transport Address: 10.0.0.1 00:31:52.768 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.768 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.768 get_feature(0x01) failed 00:31:52.768 get_feature(0x02) failed 00:31:52.768 get_feature(0x04) failed 00:31:52.768 ===================================================== 00:31:52.768 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:52.768 ===================================================== 00:31:52.768 Controller Capabilities/Features 00:31:52.768 ================================ 00:31:52.768 Vendor ID: 0000 00:31:52.768 Subsystem Vendor ID: 0000 00:31:52.768 Serial Number: 9b0e3d6efbf2302db632 00:31:52.768 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:52.768 Firmware Version: 6.7.0-68 00:31:52.768 Recommended Arb Burst: 6 00:31:52.768 IEEE OUI Identifier: 00 00 00 00:31:52.768 Multi-path I/O 00:31:52.768 May have multiple subsystem ports: Yes 00:31:52.768 May have multiple controllers: Yes 00:31:52.768 Associated with SR-IOV VF: No 00:31:52.768 Max Data Transfer Size: Unlimited 00:31:52.768 Max Number of Namespaces: 1024 00:31:52.768 Max Number of I/O Queues: 128 00:31:52.768 NVMe Specification Version (VS): 1.3 00:31:52.768 NVMe Specification Version (Identify): 1.3 00:31:52.768 Maximum Queue Entries: 1024 00:31:52.768 Contiguous Queues Required: No 00:31:52.768 Arbitration Mechanisms Supported 00:31:52.768 Weighted Round Robin: Not Supported 00:31:52.768 Vendor Specific: Not Supported 00:31:52.768 Reset Timeout: 7500 ms 00:31:52.768 Doorbell Stride: 4 bytes 00:31:52.768 NVM Subsystem Reset: Not Supported 00:31:52.768 Command Sets Supported 00:31:52.768 NVM Command Set: Supported 00:31:52.768 Boot Partition: Not Supported 00:31:52.768 Memory Page Size Minimum: 4096 bytes 00:31:52.768 Memory Page Size Maximum: 4096 bytes 00:31:52.768 
Persistent Memory Region: Not Supported 00:31:52.768 Optional Asynchronous Events Supported 00:31:52.768 Namespace Attribute Notices: Supported 00:31:52.768 Firmware Activation Notices: Not Supported 00:31:52.768 ANA Change Notices: Supported 00:31:52.768 PLE Aggregate Log Change Notices: Not Supported 00:31:52.768 LBA Status Info Alert Notices: Not Supported 00:31:52.768 EGE Aggregate Log Change Notices: Not Supported 00:31:52.768 Normal NVM Subsystem Shutdown event: Not Supported 00:31:52.768 Zone Descriptor Change Notices: Not Supported 00:31:52.768 Discovery Log Change Notices: Not Supported 00:31:52.768 Controller Attributes 00:31:52.768 128-bit Host Identifier: Supported 00:31:52.768 Non-Operational Permissive Mode: Not Supported 00:31:52.768 NVM Sets: Not Supported 00:31:52.768 Read Recovery Levels: Not Supported 00:31:52.768 Endurance Groups: Not Supported 00:31:52.768 Predictable Latency Mode: Not Supported 00:31:52.768 Traffic Based Keep ALive: Supported 00:31:52.768 Namespace Granularity: Not Supported 00:31:52.768 SQ Associations: Not Supported 00:31:52.768 UUID List: Not Supported 00:31:52.768 Multi-Domain Subsystem: Not Supported 00:31:52.768 Fixed Capacity Management: Not Supported 00:31:52.768 Variable Capacity Management: Not Supported 00:31:52.768 Delete Endurance Group: Not Supported 00:31:52.768 Delete NVM Set: Not Supported 00:31:52.768 Extended LBA Formats Supported: Not Supported 00:31:52.768 Flexible Data Placement Supported: Not Supported 00:31:52.768 00:31:52.768 Controller Memory Buffer Support 00:31:52.768 ================================ 00:31:52.768 Supported: No 00:31:52.768 00:31:52.768 Persistent Memory Region Support 00:31:52.768 ================================ 00:31:52.768 Supported: No 00:31:52.768 00:31:52.768 Admin Command Set Attributes 00:31:52.768 ============================ 00:31:52.768 Security Send/Receive: Not Supported 00:31:52.768 Format NVM: Not Supported 00:31:52.768 Firmware Activate/Download: Not Supported 00:31:52.768 Namespace Management: Not Supported 00:31:52.768 Device Self-Test: Not Supported 00:31:52.768 Directives: Not Supported 00:31:52.768 NVMe-MI: Not Supported 00:31:52.768 Virtualization Management: Not Supported 00:31:52.768 Doorbell Buffer Config: Not Supported 00:31:52.768 Get LBA Status Capability: Not Supported 00:31:52.768 Command & Feature Lockdown Capability: Not Supported 00:31:52.768 Abort Command Limit: 4 00:31:52.768 Async Event Request Limit: 4 00:31:52.768 Number of Firmware Slots: N/A 00:31:52.768 Firmware Slot 1 Read-Only: N/A 00:31:52.768 Firmware Activation Without Reset: N/A 00:31:52.768 Multiple Update Detection Support: N/A 00:31:52.768 Firmware Update Granularity: No Information Provided 00:31:52.768 Per-Namespace SMART Log: Yes 00:31:52.768 Asymmetric Namespace Access Log Page: Supported 00:31:52.768 ANA Transition Time : 10 sec 00:31:52.768 00:31:52.768 Asymmetric Namespace Access Capabilities 00:31:52.768 ANA Optimized State : Supported 00:31:52.768 ANA Non-Optimized State : Supported 00:31:52.768 ANA Inaccessible State : Supported 00:31:52.768 ANA Persistent Loss State : Supported 00:31:52.768 ANA Change State : Supported 00:31:52.768 ANAGRPID is not changed : No 00:31:52.768 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:52.768 00:31:52.768 ANA Group Identifier Maximum : 128 00:31:52.768 Number of ANA Group Identifiers : 128 00:31:52.768 Max Number of Allowed Namespaces : 1024 00:31:52.768 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:52.768 Command Effects Log Page: Supported 
00:31:52.768 Get Log Page Extended Data: Supported 00:31:52.768 Telemetry Log Pages: Not Supported 00:31:52.768 Persistent Event Log Pages: Not Supported 00:31:52.768 Supported Log Pages Log Page: May Support 00:31:52.768 Commands Supported & Effects Log Page: Not Supported 00:31:52.768 Feature Identifiers & Effects Log Page:May Support 00:31:52.768 NVMe-MI Commands & Effects Log Page: May Support 00:31:52.768 Data Area 4 for Telemetry Log: Not Supported 00:31:52.768 Error Log Page Entries Supported: 128 00:31:52.768 Keep Alive: Supported 00:31:52.768 Keep Alive Granularity: 1000 ms 00:31:52.768 00:31:52.768 NVM Command Set Attributes 00:31:52.768 ========================== 00:31:52.768 Submission Queue Entry Size 00:31:52.768 Max: 64 00:31:52.768 Min: 64 00:31:52.769 Completion Queue Entry Size 00:31:52.769 Max: 16 00:31:52.769 Min: 16 00:31:52.769 Number of Namespaces: 1024 00:31:52.769 Compare Command: Not Supported 00:31:52.769 Write Uncorrectable Command: Not Supported 00:31:52.769 Dataset Management Command: Supported 00:31:52.769 Write Zeroes Command: Supported 00:31:52.769 Set Features Save Field: Not Supported 00:31:52.769 Reservations: Not Supported 00:31:52.769 Timestamp: Not Supported 00:31:52.769 Copy: Not Supported 00:31:52.769 Volatile Write Cache: Present 00:31:52.769 Atomic Write Unit (Normal): 1 00:31:52.769 Atomic Write Unit (PFail): 1 00:31:52.769 Atomic Compare & Write Unit: 1 00:31:52.769 Fused Compare & Write: Not Supported 00:31:52.769 Scatter-Gather List 00:31:52.769 SGL Command Set: Supported 00:31:52.769 SGL Keyed: Not Supported 00:31:52.769 SGL Bit Bucket Descriptor: Not Supported 00:31:52.769 SGL Metadata Pointer: Not Supported 00:31:52.769 Oversized SGL: Not Supported 00:31:52.769 SGL Metadata Address: Not Supported 00:31:52.769 SGL Offset: Supported 00:31:52.769 Transport SGL Data Block: Not Supported 00:31:52.769 Replay Protected Memory Block: Not Supported 00:31:52.769 00:31:52.769 Firmware Slot Information 00:31:52.769 ========================= 00:31:52.769 Active slot: 0 00:31:52.769 00:31:52.769 Asymmetric Namespace Access 00:31:52.769 =========================== 00:31:52.769 Change Count : 0 00:31:52.769 Number of ANA Group Descriptors : 1 00:31:52.769 ANA Group Descriptor : 0 00:31:52.769 ANA Group ID : 1 00:31:52.769 Number of NSID Values : 1 00:31:52.769 Change Count : 0 00:31:52.769 ANA State : 1 00:31:52.769 Namespace Identifier : 1 00:31:52.769 00:31:52.769 Commands Supported and Effects 00:31:52.769 ============================== 00:31:52.769 Admin Commands 00:31:52.769 -------------- 00:31:52.769 Get Log Page (02h): Supported 00:31:52.769 Identify (06h): Supported 00:31:52.769 Abort (08h): Supported 00:31:52.769 Set Features (09h): Supported 00:31:52.769 Get Features (0Ah): Supported 00:31:52.769 Asynchronous Event Request (0Ch): Supported 00:31:52.769 Keep Alive (18h): Supported 00:31:52.769 I/O Commands 00:31:52.769 ------------ 00:31:52.769 Flush (00h): Supported 00:31:52.769 Write (01h): Supported LBA-Change 00:31:52.769 Read (02h): Supported 00:31:52.769 Write Zeroes (08h): Supported LBA-Change 00:31:52.769 Dataset Management (09h): Supported 00:31:52.769 00:31:52.769 Error Log 00:31:52.769 ========= 00:31:52.769 Entry: 0 00:31:52.769 Error Count: 0x3 00:31:52.769 Submission Queue Id: 0x0 00:31:52.769 Command Id: 0x5 00:31:52.769 Phase Bit: 0 00:31:52.769 Status Code: 0x2 00:31:52.769 Status Code Type: 0x0 00:31:52.769 Do Not Retry: 1 00:31:52.769 Error Location: 0x28 00:31:52.769 LBA: 0x0 00:31:52.769 Namespace: 0x0 00:31:52.769 Vendor Log 
Page: 0x0 00:31:52.769 ----------- 00:31:52.769 Entry: 1 00:31:52.769 Error Count: 0x2 00:31:52.769 Submission Queue Id: 0x0 00:31:52.769 Command Id: 0x5 00:31:52.769 Phase Bit: 0 00:31:52.769 Status Code: 0x2 00:31:52.769 Status Code Type: 0x0 00:31:52.769 Do Not Retry: 1 00:31:52.769 Error Location: 0x28 00:31:52.769 LBA: 0x0 00:31:52.769 Namespace: 0x0 00:31:52.769 Vendor Log Page: 0x0 00:31:52.769 ----------- 00:31:52.769 Entry: 2 00:31:52.769 Error Count: 0x1 00:31:52.769 Submission Queue Id: 0x0 00:31:52.769 Command Id: 0x4 00:31:52.769 Phase Bit: 0 00:31:52.769 Status Code: 0x2 00:31:52.769 Status Code Type: 0x0 00:31:52.769 Do Not Retry: 1 00:31:52.769 Error Location: 0x28 00:31:52.769 LBA: 0x0 00:31:52.769 Namespace: 0x0 00:31:52.769 Vendor Log Page: 0x0 00:31:52.769 00:31:52.769 Number of Queues 00:31:52.769 ================ 00:31:52.769 Number of I/O Submission Queues: 128 00:31:52.769 Number of I/O Completion Queues: 128 00:31:52.769 00:31:52.769 ZNS Specific Controller Data 00:31:52.769 ============================ 00:31:52.769 Zone Append Size Limit: 0 00:31:52.769 00:31:52.769 00:31:52.769 Active Namespaces 00:31:52.769 ================= 00:31:52.769 get_feature(0x05) failed 00:31:52.769 Namespace ID:1 00:31:52.769 Command Set Identifier: NVM (00h) 00:31:52.769 Deallocate: Supported 00:31:52.769 Deallocated/Unwritten Error: Not Supported 00:31:52.769 Deallocated Read Value: Unknown 00:31:52.769 Deallocate in Write Zeroes: Not Supported 00:31:52.769 Deallocated Guard Field: 0xFFFF 00:31:52.769 Flush: Supported 00:31:52.769 Reservation: Not Supported 00:31:52.769 Namespace Sharing Capabilities: Multiple Controllers 00:31:52.769 Size (in LBAs): 1953525168 (931GiB) 00:31:52.769 Capacity (in LBAs): 1953525168 (931GiB) 00:31:52.769 Utilization (in LBAs): 1953525168 (931GiB) 00:31:52.769 UUID: 7a90bc41-3373-487f-9ac4-0848be6715e9 00:31:52.769 Thin Provisioning: Not Supported 00:31:52.769 Per-NS Atomic Units: Yes 00:31:52.769 Atomic Boundary Size (Normal): 0 00:31:52.769 Atomic Boundary Size (PFail): 0 00:31:52.769 Atomic Boundary Offset: 0 00:31:52.769 NGUID/EUI64 Never Reused: No 00:31:52.769 ANA group ID: 1 00:31:52.769 Namespace Write Protected: No 00:31:52.769 Number of LBA Formats: 1 00:31:52.769 Current LBA Format: LBA Format #00 00:31:52.769 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:52.769 00:31:52.769 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:52.769 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:52.769 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:52.769 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:52.769 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:52.769 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:52.769 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:53.030 rmmod nvme_tcp 00:31:53.030 rmmod nvme_fabrics 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:53.030 06:27:46 
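Both identify dumps above come from SPDK's spdk_nvme_identify example, pointed first at the discovery NQN and then at the test subsystem over the kernel target's TCP listener; the get_feature(0x01/0x02/0x04/0x05) failed lines are not fatal here, since the kernel target simply rejects feature IDs it does not implement. The command form used by the test, with the transport ID given as a single string:

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'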
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.030 06:27:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:54.937 06:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:56.316 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:56.316 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:56.316 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:56.316 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:56.316 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:56.316 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:56.316 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:56.316 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:56.316 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:56.316 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:56.316 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:56.316 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:56.316 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:56.316 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:56.316 0000:80:04.1 (8086 0e21): ioatdma -> 
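clean_kernel_target undoes the configfs setup in reverse order before setup.sh hands the devices back to vfio-pci; the port's subsystem symlink has to be removed before the directories can be deleted. A mirror-image sketch of the teardown shown above (the destination of the "echo 0" is an assumption, the rest follows the log):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"      # presumably the namespace enable flag
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1"
  rmdir "$nvmet/ports/1"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet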
vfio-pci 00:31:56.316 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:57.256 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:57.256 00:31:57.256 real 0m9.291s 00:31:57.256 user 0m1.886s 00:31:57.256 sys 0m3.336s 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.256 ************************************ 00:31:57.256 END TEST nvmf_identify_kernel_target 00:31:57.256 ************************************ 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.256 ************************************ 00:31:57.256 START TEST nvmf_auth_host 00:31:57.256 ************************************ 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:57.256 * Looking for test storage... 00:31:57.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.256 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 
00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:57.516 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:57.517 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.517 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.517 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.517 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:57.517 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:57.517 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:57.517 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.423 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.423 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:59.423 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:59.423 06:27:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:59.423 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:59.424 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
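
Editor's note: at this point gather_supported_nvmf_pci_devs has matched two Intel E810 ports (device ID 0x159b) and, for each, expands "/sys/bus/pci/devices/$pci/net/"* to find the bound kernel interface. A small stand-alone sketch of that lookup, using the two BDFs reported in this run (adjust for other hardware):

# List the netdevs sitting on the E810 ports the trace found (0000:0a:00.0/1):
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] || continue            # port has no bound network interface
        echo "$pci -> $(basename "$netdir")"    # prints e.g. 0000:0a:00.0 -> cvl_0_0
    done
done
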
00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:59.424 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:59.424 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 
00:31:59.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:59.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:31:59.424 00:31:59.424 --- 10.0.0.2 ping statistics --- 00:31:59.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.424 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:31:59.424 00:31:59.424 --- 10.0.0.1 ping statistics --- 00:31:59.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.424 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:59.424 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1873743 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1873743 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1873743 ']' 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
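
Editor's note: nvmf_tcp_init above splits the two ports into a small back-to-back topology: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP/4420 is opened in iptables, and both directions are ping-checked. A condensed replay of those steps (same commands as the trace, this run's interface names, run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # accept NVMe/TCP (4420) on the root-namespace port
ping -c 1 10.0.0.2                                            # root namespace -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> root namespace
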
00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:59.425 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.993 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:59.993 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:59.993 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:59.993 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:59.993 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.993 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=22e0790163f8aaf37f60dd66f06e0e74 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Twn 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 22e0790163f8aaf37f60dd66f06e0e74 0 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 22e0790163f8aaf37f60dd66f06e0e74 0 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=22e0790163f8aaf37f60dd66f06e0e74 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Twn 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Twn 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Twn 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:59.994 06:27:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b66bb9eaf5f080ff98d532dbfe8a1e601f2ac44f4f11e5579401a7448d4f24c 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Usa 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b66bb9eaf5f080ff98d532dbfe8a1e601f2ac44f4f11e5579401a7448d4f24c 3 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b66bb9eaf5f080ff98d532dbfe8a1e601f2ac44f4f11e5579401a7448d4f24c 3 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b66bb9eaf5f080ff98d532dbfe8a1e601f2ac44f4f11e5579401a7448d4f24c 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Usa 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Usa 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Usa 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2601fb3ba870c2283c0f0cd818fd1605e53451663f3a1023 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.i4Q 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2601fb3ba870c2283c0f0cd818fd1605e53451663f3a1023 0 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2601fb3ba870c2283c0f0cd818fd1605e53451663f3a1023 0 
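
Editor's note: each gen_dhchap_key call traced here takes a hash name plus a key length in hex characters ('null 32' draws 16 random bytes, 'sha512 64' draws 32, and the 'null 48' call in progress above draws 24), writes the formatted secret to a mktemp file and locks it down to 0600. A minimal sketch of the generation step, with the DHHC-1 wrapping done by the inline python helper left out (its output format is examined after the connect step further down):

# Produce the raw material for a 48-hex-character DHCHAP key, as in the trace:
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
file=$(mktemp -t spdk.key-null.XXX)    # e.g. /tmp/spdk.key-null.i4Q
# (the helper then stores the DHHC-1-wrapped form of $key in $file)
chmod 0600 "$file"                     # secrets must not be world readable
echo "key=$key file=$file"
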
00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2601fb3ba870c2283c0f0cd818fd1605e53451663f3a1023 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.i4Q 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.i4Q 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.i4Q 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5c51fc70deb358c8bc66e7dd41a6d4429877ed20d647fe64 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qqz 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5c51fc70deb358c8bc66e7dd41a6d4429877ed20d647fe64 2 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5c51fc70deb358c8bc66e7dd41a6d4429877ed20d647fe64 2 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5c51fc70deb358c8bc66e7dd41a6d4429877ed20d647fe64 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qqz 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qqz 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.qqz 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.994 06:27:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8c32c7f60f24f1ff67e2a550c4e04c6 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5Gj 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8c32c7f60f24f1ff67e2a550c4e04c6 1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8c32c7f60f24f1ff67e2a550c4e04c6 1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8c32c7f60f24f1ff67e2a550c4e04c6 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:59.994 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5Gj 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5Gj 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5Gj 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a6d663b83cef620ef90dac4fec929bb2 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AVM 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a6d663b83cef620ef90dac4fec929bb2 1 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a6d663b83cef620ef90dac4fec929bb2 1 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=a6d663b83cef620ef90dac4fec929bb2 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AVM 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AVM 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.AVM 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e42e5c99f9b9fe5136e1e39a36aa8e587591d10f19a60be7 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7to 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e42e5c99f9b9fe5136e1e39a36aa8e587591d10f19a60be7 2 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e42e5c99f9b9fe5136e1e39a36aa8e587591d10f19a60be7 2 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e42e5c99f9b9fe5136e1e39a36aa8e587591d10f19a60be7 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7to 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7to 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7to 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:00.254 06:27:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b89083c31fd198c09747b290c3562f2 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pEO 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b89083c31fd198c09747b290c3562f2 0 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b89083c31fd198c09747b290c3562f2 0 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b89083c31fd198c09747b290c3562f2 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pEO 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pEO 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pEO 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce2040e7fe3db6ecf8f72a90d762c31a288bcec386eb11cd721fecb28b7b37a8 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NH3 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ce2040e7fe3db6ecf8f72a90d762c31a288bcec386eb11cd721fecb28b7b37a8 3 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce2040e7fe3db6ecf8f72a90d762c31a288bcec386eb11cd721fecb28b7b37a8 3 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce2040e7fe3db6ecf8f72a90d762c31a288bcec386eb11cd721fecb28b7b37a8 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NH3 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NH3 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NH3 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1873743 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1873743 ']' 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:00.254 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Twn 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Usa ]] 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Usa 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.i4Q 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.qqz ]] 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.qqz 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5Gj 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.513 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.AVM ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AVM 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.7to 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pEO ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pEO 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NH3 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.771 06:27:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:00.771 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:01.708 Waiting for block devices as requested 00:32:01.708 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:01.708 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:01.966 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:01.966 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:01.966 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:02.226 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:02.226 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:02.226 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:02.226 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:02.486 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:02.486 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:02.486 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:02.745 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:02.745 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:02.745 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:02.745 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:03.004 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:03.263 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:03.523 No valid GPT data, bailing 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:03.523 06:27:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:03.523 00:32:03.523 Discovery Log Number of Records 2, Generation counter 2 00:32:03.523 =====Discovery Log Entry 0====== 00:32:03.523 trtype: tcp 00:32:03.523 adrfam: ipv4 00:32:03.523 subtype: current discovery subsystem 00:32:03.523 treq: not specified, sq flow control disable supported 00:32:03.523 portid: 1 00:32:03.523 trsvcid: 4420 00:32:03.523 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:03.523 traddr: 10.0.0.1 00:32:03.523 eflags: none 00:32:03.523 sectype: none 00:32:03.523 =====Discovery Log Entry 1====== 00:32:03.523 trtype: tcp 00:32:03.523 adrfam: ipv4 00:32:03.523 subtype: nvme subsystem 00:32:03.523 treq: not specified, sq flow control disable supported 00:32:03.523 portid: 1 00:32:03.523 trsvcid: 4420 00:32:03.523 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:03.523 traddr: 10.0.0.1 00:32:03.523 eflags: none 00:32:03.523 sectype: none 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.523 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.782 nvme0n1 00:32:03.782 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.782 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.782 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.782 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.782 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.782 06:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:03.782 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
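[editor note] Each connect_authenticate iteration traced above is, on the SPDK initiator side, just two RPCs: restrict the allowed DH-HMAC-CHAP digests/DH groups with bdev_nvme_set_options, then attach to the kernel target passing the host secret (and the controller secret when one exists for that keyid). A minimal sketch of that sequence follows; the scripts/rpc.py path and the prior registration of key1/ckey1 via a keyring RPC are assumptions, everything else is taken from the flags visible in the trace.

    # Sketch only (not part of the captured trace): one connect_authenticate step
    # driven by hand, assuming a running SPDK app reachable through scripts/rpc.py.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # key1/ckey1 are assumed to name secrets registered with the app beforehand;
    # --dhchap-ctrlr-key is only passed when a controller secret exists for the
    # keyid, which is what the ${ckeys[keyid]:+...} expansion in auth.sh@58 guards.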
00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.783 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.043 nvme0n1 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.043 06:27:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.043 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.044 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 nvme0n1 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.304 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.305 nvme0n1 00:32:04.305 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.563 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.564 nvme0n1 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.564 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 
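[editor note] The nvmet_auth_set_key calls (host/auth.sh@42-51) program the target half of the handshake: the `echo 'hmac(sha256)'`, `echo ffdhe2048` and DHHC-1 secret lines in the trace are values being written into the kernel nvmet per-host DH-HMAC-CHAP attributes. A rough equivalent is sketched below; the configfs paths are an assumption based on the usual Linux nvmet layout and do not appear in the trace, only the echoed values do.

    # Approximate effect of: nvmet_auth_set_key sha256 ffdhe2048 <keyid>
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    echo 'hmac(sha256)'   > "$host_dir/dhchap_hash"
    echo ffdhe2048        > "$host_dir/dhchap_dhgroup"
    echo "${keys[keyid]}" > "$host_dir/dhchap_key"        # DHHC-1 host secret
    # the controller secret is only set when one exists for this keyid
    [[ -n "${ckeys[keyid]}" ]] && echo "${ckeys[keyid]}" > "$host_dir/dhchap_ctrl_key"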
00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.822 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.823 06:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.823 nvme0n1 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.823 06:27:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:04.823 
06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.823 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.081 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.082 nvme0n1 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.082 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.341 06:27:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.341 nvme0n1 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.341 06:27:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.341 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.599 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.600 nvme0n1 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.600 06:27:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.600 06:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.860 nvme0n1 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
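[editor note] The repetition in the trace comes from the test sweeping every digest, DH group and keyid combination: for each one it keys the target, connects from the SPDK host, verifies the controller appeared, and tears it down before the next combination. The outline below paraphrases that loop from the auth.sh line numbers visible above (@100-@104 and the @64/@65 check inside connect_authenticate); it is a summary, not verbatim script text.

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # key the kernel target
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach from SPDK host
          # connect_authenticate then verifies and cleans up:
          #   [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          #   rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done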
00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.860 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.119 nvme0n1 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.119 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.383 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 nvme0n1 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.673 06:27:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.673 06:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.932 nvme0n1 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
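Each pass of the inner loop traced above follows the same pattern: host/auth.sh first programs one of the pre-generated DH-HMAC-CHAP secrets into the kernel nvmet target (nvmet_auth_set_key), then connect_authenticate drives the SPDK initiator through the RPC socket and checks that the controller comes up. A minimal sketch of that host-side sequence, using only the RPCs visible in the trace (rpc_cmd is the test suite's wrapper around the SPDK RPC client; sha256, ffdhe4096 and key0/ckey0 stand in for whichever digest, dhgroup and key id the loop is currently sweeping):

    # limit the initiator to the digest/dhgroup combination under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # attach over TCP, authenticating with keyN (plus ckeyN when a controller key is defined)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the attach succeeded, then detach before the next combination
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0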
00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.932 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.191 nvme0n1 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
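The target-side half of each iteration, nvmet_auth_set_key, is what produces the echo 'hmac(sha256)', echo ffdhe4096 and echo DHHC-1:... lines in the trace (auth.sh@48 through @51); xtrace does not print redirections, so the destinations are not shown. A hedged sketch of where those values presumably land, assuming the standard Linux nvmet configfs attribute names (the host NQN matches the -q argument used on the initiator side):

    # assumed configfs paths; not shown in the trace itself
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # auth.sh@48: digest
    echo ffdhe4096      > "$host/dhchap_dhgroup"    # auth.sh@49: DH group
    echo "$key"         > "$host/dhchap_key"        # auth.sh@50: DHHC-1 host secret
    # auth.sh@51: controller secret, only for the bidirectional (ckey) cases
    [ -n "$ckey" ] && echo "$ckey" > "$host/dhchap_ctrl_key"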
00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.191 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:07.449 06:28:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.449 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.709 nvme0n1 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.709 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.710 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.710 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:07.710 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.710 06:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.970 nvme0n1 00:32:07.970 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.970 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.970 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.970 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.971 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.543 nvme0n1 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.543 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.113 nvme0n1 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.113 06:28:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.113 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.685 nvme0n1 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:09.685 
06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.685 06:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.685 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.254 nvme0n1 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:10.254 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.514 06:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.082 nvme0n1 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:11.082 06:28:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:11.082 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.083 06:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.017 nvme0n1 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.017 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.018 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.956 nvme0n1 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.956 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.216 06:28:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.216 06:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.154 nvme0n1 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.154 06:28:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.154 06:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.094 nvme0n1 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.094 06:28:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.094 06:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.032 nvme0n1 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.032 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:16.033 
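Each connect_authenticate pass traced above reduces to four RPC calls against the running host application. The sketch below is a hypothetical standalone equivalent of the sha256/ffdhe8192 keyid=0 pass, assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py and that key0/ckey0 are the DH-HMAC-CHAP key names registered earlier by the test; it is not part of the captured output.

  # Restrict negotiation to the digest/DH group under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Attach with DH-HMAC-CHAP: key0 authenticates the host, ckey0 enables
  # bidirectional verification of the controller
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # The trace then checks that a controller named nvme0 shows up, and tears it down
  scripts/rpc.py bdev_nvme_get_controllers
  scripts/rpc.py bdev_nvme_detach_controller nvme0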
06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.033 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.291 nvme0n1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.291 nvme0n1 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.291 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.550 nvme0n1 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.550 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.809 06:28:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.809 06:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.809 nvme0n1 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.809 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.067 nvme0n1 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.067 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.068 06:28:10 
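For orientation, the host/auth.sh@100, @101, @102 and @103/@104 markers that recur in the trace correspond to nested loops over digests, DH groups and key indices. Below is a rough bash reconstruction of that structure, not the verbatim script; the digests/dhgroups/keys arrays and the two helper functions are defined elsewhere in host/auth.sh, so this is a reading aid rather than a self-contained snippet.

  for digest in "${digests[@]}"; do          # sha256 and sha384 so far in this trace
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe8192, ffdhe2048, ffdhe3072 so far
      for keyid in "${!keys[@]}"; do         # key indices 0..4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: install the expected key
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: attach, verify nvme0, detach
      done
    done
  done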
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.068 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.326 nvme0n1 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.326 06:28:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.326 06:28:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.326 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.585 nvme0n1 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.585 06:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.844 nvme0n1 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.844 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.103 nvme0n1 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:18.103 
06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.103 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.361 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.362 nvme0n1 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.362 
06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.362 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.620 06:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.878 nvme0n1 00:32:18.878 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.878 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.878 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.878 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.878 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.878 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.879 06:28:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.879 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.137 nvme0n1 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.137 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.138 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.138 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.138 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.138 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.138 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.138 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.395 nvme0n1 00:32:19.395 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.395 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.395 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.395 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.395 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.653 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:19.654 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.654 06:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.913 nvme0n1 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.913 06:28:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.913 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.199 nvme0n1 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.199 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.459 06:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.025 nvme0n1 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.025 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.592 nvme0n1 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.592 06:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.592 06:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.592 06:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.159 nvme0n1 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.159 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.159 
06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 nvme0n1 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.739 06:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.305 nvme0n1 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.305 06:28:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:23.305 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.306 06:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.239 nvme0n1 00:32:24.239 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.239 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.239 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.239 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.239 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.239 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.498 06:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.433 nvme0n1 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.433 
06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.433 06:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 nvme0n1 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.367 06:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.301 nvme0n1 00:32:27.301 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.301 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.301 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.301 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.301 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.301 06:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.301 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.302 06:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.302 06:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.677 nvme0n1 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:28.677 nvme0n1 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:28.677 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.678 06:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.937 nvme0n1 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:28.937 
06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.937 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.196 nvme0n1 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.196 
06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.196 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.197 nvme0n1 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.197 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.455 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.456 nvme0n1 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.456 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.715 06:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.715 nvme0n1 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.715 
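The trace above repeats one fixed pattern per (dhgroup, keyid) pair: host/auth.sh@101 and @102 iterate over the configured DH groups and key indexes, @103 programs the target side for that key via nvmet_auth_set_key, and @104 runs connect_authenticate against it from the host side. A minimal sketch of that driver loop, reconstructed only from the script locations and arguments visible in the trace (the dhgroups and keys arrays and both helper functions are defined elsewhere in auth.sh and are assumed here):

# driver loop as reconstructed from host/auth.sh@101-104 in the xtrace output;
# sha512 is the digest used throughout this pass of the test
for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
        # push digest/dhgroup/key for this iteration into the kernel nvmet target
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # host/auth.sh@103
        # connect from the SPDK host with the matching DH-HMAC-CHAP key and verify
        connect_authenticate sha512 "$dhgroup" "$keyid"  # host/auth.sh@104
    done
done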
06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.715 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.974 06:28:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.974 nvme0n1 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.974 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:30.231 06:28:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.231 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.232 nvme0n1 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.232 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:30.489 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.490 06:28:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.490 nvme0n1 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.490 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:30.748 
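Inside nvmet_auth_set_key (host/auth.sh@42-51) the xtrace only records the echo commands: the HMAC name ('hmac(sha512)'), the DH group, the DHHC-1 host secret, and, when the key index has a controller secret, the ckey as well (ckey is empty for keyid 4, so that last echo is skipped). The redirection targets are not captured in the trace; assuming the writes go to the standard Linux nvmet configfs attributes for the host entry, the expanded form would look roughly like the following (the configfs path and attribute names are an assumption, and the secrets are abbreviated; the full DHHC-1 strings appear verbatim in the trace above):

# hypothetical expansion of the echoes at host/auth.sh@48-51, assuming the
# kernel nvmet per-host DH-HMAC-CHAP attributes under configfs
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)'  > "$host_dir/dhchap_hash"      # digest,   auth.sh@48
echo ffdhe3072       > "$host_dir/dhchap_dhgroup"   # DH group, auth.sh@49
echo 'DHHC-1:02:...' > "$host_dir/dhchap_key"       # host key, auth.sh@50
# only when ckeys[keyid] is non-empty (bidirectional authentication):
echo 'DHHC-1:00:...' > "$host_dir/dhchap_ctrl_key"  # ctrl key, auth.sh@51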
06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.748 06:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
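The host half of each iteration (connect_authenticate, host/auth.sh@55-65) is fully visible in the trace: bdev_nvme_set_options pins the allowed digest and DH group, get_main_ns_ip resolves the initiator address (10.0.0.1 here), bdev_nvme_attach_controller performs the authenticated connect with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), and the result is checked and torn down with bdev_nvme_get_controllers and bdev_nvme_detach_controller. Condensed into one rpc_cmd sequence for the ffdhe3072/keyid=4 step shown above (rpc_cmd is the suite's wrapper around SPDK's rpc.py; folding the separate @64 checks into a single test is an inference from the trace):

# host-side sequence condensed from host/auth.sh@60-65 in the xtrace output
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4              # keyid 4 has no ckey, so no --dhchap-ctrlr-key
# the attach only succeeds if DH-HMAC-CHAP completed; confirm, then clean up
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0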
00:32:30.748 nvme0n1 00:32:30.748 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.748 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.748 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.748 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.749 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.749 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.007 06:28:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.007 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.008 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.266 nvme0n1 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.266 06:28:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.266 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.267 06:28:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.267 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.526 nvme0n1 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.526 06:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.784 nvme0n1 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.043 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.044 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.044 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.044 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:32.044 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.044 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.302 nvme0n1 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.302 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.303 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.561 nvme0n1 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.561 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.819 06:28:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.819 06:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.385 nvme0n1 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.385 06:28:26 
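
On the host side, connect_authenticate (host/auth.sh@55-61) boils down to two RPCs that appear verbatim in the trace: pin the initiator to a single digest/DH-group pair, then attach with the matching key material. Condensed here for the keyid=1/ffdhe6144 iteration just above; key1 and ckey1 are key names the test registered earlier in auth.sh, outside this excerpt.

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1   # host secret plus controller (bidirectional) secret

Because set_options narrows the negotiable algorithms to exactly one digest and one group, a successful attach proves that specific combination rather than whatever the two sides would have negotiated by default.
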
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.385 06:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 nvme0n1 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.002 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.262 nvme0n1 00:32:34.262 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.262 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.262 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.262 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.262 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.262 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
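
get_main_ns_ip (nvmf/common.sh@741-755), traced in full above, picks the address to dial by transport: the associative array stores the name of an environment variable, and for tcp that resolves to NVMF_INITIATOR_IP, i.e. 10.0.0.1 in this run. A compressed sketch; the indirect-expansion step is inferred, since xtrace only shows the already-resolved value at the later -z checks.

  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[$TEST_TRANSPORT]}   # "NVMF_INITIATOR_IP" when TEST_TRANSPORT=tcp
  ip=${!ip}                              # indirect expansion (assumed) -> 10.0.0.1
  echo "$ip"                             # consumed as -a by bdev_nvme_attach_controller
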
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.522 06:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.093 nvme0n1 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.093 06:28:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.093 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.663 nvme0n1 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
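
Every authenticated attach in this log is followed by the same short check before the next key is tried, and the nvme0n1 lines mark the namespace surfacing once the controller is up. Condensed from host/auth.sh@64-65, the verification and teardown amount to:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                        # exactly the controller created above must exist
  rpc_cmd bdev_nvme_detach_controller nvme0   # tear down so the next digest/dhgroup/key starts clean
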
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMDc5MDE2M2Y4YWFmMzdmNjBkZDY2ZjA2ZTBlNzSAGh+Y: 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI2NmJiOWVhZjVmMDgwZmY5OGQ1MzJkYmZlOGExZTYwMWYyYWM0NGY0ZjExZTU1Nzk0MDFhNzQ0OGQ0ZjI0Y+tTWXc=: 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.663 06:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.605 nvme0n1 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.605 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.606 06:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.544 nvme0n1 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.544 06:28:30 
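
The repetition in this section is driven by the nested loops at host/auth.sh@101-103: the outer loop walks the DH groups, the inner loop walks the key slots, and each pair is first provisioned on the target and then proven from the host. The sketch below only lists the groups and key ids visible in this excerpt; the real arrays (and the enclosing digest loop) are defined earlier in auth.sh.

  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do       # "${dhgroups[@]}" as seen here
      for keyid in 0 1 2 3 4; do                         # "${!keys[@]}"
          nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # target-side digest/group/key
          connect_authenticate sha512 "$dhgroup" "$keyid"   # host-side attach, verify, detach
      done
  done
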
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhjMzJjN2Y2MGYyNGYxZmY2N2UyYTU1MGM0ZTA0YzbU8gKm: 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkNjYzYjgzY2VmNjIwZWY5MGRhYzRmZWM5MjliYjK+GCC/: 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.544 06:28:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.544 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.545 06:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.478 nvme0n1 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZTVjOTlmOWI5ZmU1MTM2ZTFlMzlhMzZhYThlNTg3NTkxZDEwZjE5YTYwYmU3ZJAOgQ==: 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmI4OTA4M2MzMWZkMTk4YzA5NzQ3YjI5MGMzNTYyZjLPbkPq: 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.478 06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.478 
06:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.416 nvme0n1 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyMDQwZTdmZTNkYjZlY2Y4ZjcyYTkwZDc2MmMzMWEyODhiY2VjMzg2ZWIxMWNkNzIxZmVjYjI4YjdiMzdhOETJ7ew=: 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.416 06:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.350 nvme0n1 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYwMWZiM2JhODcwYzIyODNjMGYwY2Q4MThmZDE2MDVlNTM0NTE2NjNmM2ExMDIzIh6WFA==: 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: ]] 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM1MWZjNzBkZWIzNThjOGJjNjZlN2RkNDFhNmQ0NDI5ODc3ZWQyMGQ2NDdmZTY0FU1aCg==: 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:40.350 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.351 request: 00:32:40.351 { 00:32:40.351 "name": "nvme0", 00:32:40.351 "trtype": "tcp", 00:32:40.351 "traddr": "10.0.0.1", 00:32:40.351 "adrfam": "ipv4", 00:32:40.351 "trsvcid": "4420", 00:32:40.351 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:40.351 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:40.351 "prchk_reftag": false, 00:32:40.351 "prchk_guard": false, 00:32:40.351 "hdgst": false, 00:32:40.351 "ddgst": false, 00:32:40.351 "method": "bdev_nvme_attach_controller", 00:32:40.351 "req_id": 1 00:32:40.351 } 00:32:40.351 Got JSON-RPC error response 00:32:40.351 response: 00:32:40.351 { 00:32:40.351 "code": -5, 00:32:40.351 "message": "Input/output error" 00:32:40.351 } 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.351 06:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.351 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.610 request: 00:32:40.610 { 00:32:40.610 "name": "nvme0", 00:32:40.610 "trtype": "tcp", 00:32:40.610 "traddr": "10.0.0.1", 00:32:40.610 "adrfam": "ipv4", 00:32:40.610 "trsvcid": "4420", 00:32:40.610 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:40.610 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:40.610 "prchk_reftag": false, 00:32:40.610 "prchk_guard": false, 00:32:40.610 "hdgst": false, 00:32:40.610 "ddgst": false, 00:32:40.610 "dhchap_key": "key2", 00:32:40.610 "method": "bdev_nvme_attach_controller", 00:32:40.610 "req_id": 1 00:32:40.610 } 00:32:40.610 Got JSON-RPC error response 00:32:40.610 response: 00:32:40.610 { 00:32:40.610 "code": -5, 00:32:40.610 "message": "Input/output error" 00:32:40.610 } 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.610 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.610 request: 00:32:40.610 { 00:32:40.610 "name": "nvme0", 00:32:40.610 "trtype": "tcp", 00:32:40.610 "traddr": "10.0.0.1", 00:32:40.610 "adrfam": "ipv4", 00:32:40.610 "trsvcid": "4420", 00:32:40.610 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:40.610 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:40.610 "prchk_reftag": false, 00:32:40.610 "prchk_guard": false, 00:32:40.610 "hdgst": false, 00:32:40.610 "ddgst": false, 00:32:40.610 "dhchap_key": "key1", 00:32:40.610 "dhchap_ctrlr_key": "ckey2", 00:32:40.611 "method": "bdev_nvme_attach_controller", 00:32:40.611 "req_id": 1 00:32:40.611 } 00:32:40.611 Got JSON-RPC error response 00:32:40.611 response: 00:32:40.611 { 00:32:40.611 "code": -5, 00:32:40.611 "message": "Input/output error" 00:32:40.611 } 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
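All three bdev_nvme_attach_controller calls above run under NOT, so the JSON-RPC failures (code -5, Input/output error) are the expected result: the target requires DH-HMAC-CHAP and the host attaches with no key, with only key2, and finally with the mismatched pair key1/ckey2. A hand-run equivalent of the last attempt, assuming the same listener on 10.0.0.1:4420 and key objects named key1 and ckey2 registered the way the earlier part of the test did, would look roughly like:

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2
  # expected to fail with -5 (Input/output error) because the secrets do not match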
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:40.611 rmmod nvme_tcp 00:32:40.611 rmmod nvme_fabrics 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1873743 ']' 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1873743 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1873743 ']' 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1873743 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.611 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1873743 00:32:40.870 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:40.870 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:40.870 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1873743' 00:32:40.870 killing process with pid 1873743 00:32:40.870 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1873743 00:32:40.870 06:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1873743 00:32:40.870 06:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:40.870 06:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:40.870 06:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:40.870 06:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:40.870 06:28:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:40.870 06:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.870 06:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.870 06:28:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:43.409 06:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:44.344 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:44.344 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:44.344 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:44.344 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:44.344 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:44.344 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:44.344 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:44.344 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:44.344 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:45.278 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:45.278 06:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Twn /tmp/spdk.key-null.i4Q /tmp/spdk.key-sha256.5Gj /tmp/spdk.key-sha384.7to /tmp/spdk.key-sha512.NH3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:45.278 06:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
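cleanup here unwinds the kernel-mode target that the auth test built through configfs: the allowed host and host entry are removed, the namespace, port and subsystem directories are deleted, the nvmet modules are unloaded, and setup.sh then rebinds the ioatdma/NVMe devices to vfio-pci for the next suite. Condensed from the trace (the echo 0 disables the subsystem/namespace in configfs; its exact path is not shown in the log):

  rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet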
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:46.660 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:46.660 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:46.660 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:46.660 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:46.660 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:46.660 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:46.660 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:46.660 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:46.660 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:46.660 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:46.660 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:46.660 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:46.660 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:46.660 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:46.660 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:46.660 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:46.660 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:46.660 00:32:46.660 real 0m49.245s 00:32:46.660 user 0m47.077s 00:32:46.660 sys 0m5.688s 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.660 ************************************ 00:32:46.660 END TEST nvmf_auth_host 00:32:46.660 ************************************ 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.660 ************************************ 00:32:46.660 START TEST nvmf_digest 00:32:46.660 ************************************ 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:46.660 * Looking for test storage... 
00:32:46.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.660 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:46.661 
06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:46.661 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:48.565 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.565 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:48.565 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:48.565 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:48.565 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:48.565 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:48.566 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:48.566 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:48.566 
06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:48.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:48.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.825 06:28:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:48.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:32:48.825 00:32:48.825 --- 10.0.0.2 ping statistics --- 00:32:48.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.825 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:48.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:32:48.825 00:32:48.825 --- 10.0.0.1 ping statistics --- 00:32:48.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.825 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:48.825 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:48.825 ************************************ 00:32:48.825 START TEST nvmf_digest_clean 00:32:48.825 ************************************ 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
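nvmftestinit for the digest suite moves the target-side port (cvl_0_0) into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, gives them 10.0.0.2 and 10.0.0.1 respectively, opens TCP/4420 in iptables and checks reachability in both directions with a single ping each; only after that does it load nvme-tcp on the host. The plumbing, condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator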
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1883190 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1883190 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1883190 ']' 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:48.825 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:48.825 [2024-07-23 06:28:42.080837] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:32:48.825 [2024-07-23 06:28:42.080938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.825 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.825 [2024-07-23 06:28:42.118973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:48.825 [2024-07-23 06:28:42.144812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.083 [2024-07-23 06:28:42.228111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:49.083 [2024-07-23 06:28:42.228181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:49.083 [2024-07-23 06:28:42.228205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:49.083 [2024-07-23 06:28:42.228216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:49.083 [2024-07-23 06:28:42.228226] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:49.083 [2024-07-23 06:28:42.228256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.083 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.083 null0 00:32:49.083 [2024-07-23 06:28:42.415241] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.344 [2024-07-23 06:28:42.439452] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1883210 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1883210 /var/tmp/bperf.sock 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1883210 ']' 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
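The target for this suite is nvmf_tgt started inside that namespace with --wait-for-rpc, then configured over its RPC socket with a null bdev (the null0 above), a TCP transport and a listener on 10.0.0.2:4420; the load generator is bdevperf, launched with a separate RPC socket so the harness can drive it independently of the target. Roughly, with the long jenkins paths shortened and the target configuration summarized rather than copied:

  # target inside the namespace, held at --wait-for-rpc until the harness configures it
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

  # load generator for the first pass: 4 KiB random reads, queue depth 128, 2 seconds
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &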
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.344 [2024-07-23 06:28:42.485940] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:32:49.344 [2024-07-23 06:28:42.486017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883210 ] 00:32:49.344 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.344 [2024-07-23 06:28:42.517856] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:49.344 [2024-07-23 06:28:42.547541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.344 [2024-07-23 06:28:42.638342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:49.344 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:49.612 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:49.909 06:28:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.909 06:28:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.167 nvme0n1 00:32:50.167 06:28:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:50.167 06:28:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:50.427 Running I/O for 2 seconds... 
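Once bdevperf is listening on /var/tmp/bperf.sock, the clean (no DSA) variant finishes framework init, attaches a controller to the target listener with data digest enabled (--ddgst), and kicks off the timed run; the nvme0n1 above is the remote namespace showing up as the bdev under test. The same three calls, as in the trace:

  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests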
00:32:52.334 00:32:52.334 Latency(us) 00:32:52.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.334 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:52.334 nvme0n1 : 2.01 19560.52 76.41 0.00 0.00 6536.17 2682.12 16990.81 00:32:52.334 =================================================================================================================== 00:32:52.334 Total : 19560.52 76.41 0.00 0.00 6536.17 2682.12 16990.81 00:32:52.334 0 00:32:52.334 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:52.334 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:52.334 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:52.334 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:52.334 | select(.opcode=="crc32c") 00:32:52.334 | "\(.module_name) \(.executed)"' 00:32:52.334 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1883210 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1883210 ']' 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1883210 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883210 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1883210' 00:32:52.594 killing process with pid 1883210 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1883210 00:32:52.594 Received shutdown signal, test time was about 2.000000 seconds 00:32:52.594 00:32:52.594 Latency(us) 00:32:52.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.594 =================================================================================================================== 00:32:52.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:52.594 06:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
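The first table is the 4 KiB / qd 128 result (about 19.5k IOPS, 76 MiB/s over the 2-second window). After the run the harness pulls accel_get_stats from bdevperf, keeps only the crc32c operations and checks that the software accel module executed a non-zero count, which is what the digest_clean variant expects; this bdevperf instance is then killed before the next configuration starts. The stats check reduces to (jq filter as in the trace, shown on one line):

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected module name here: software, with executed > 0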
common/autotest_common.sh@972 -- # wait 1883210 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1883622 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1883622 /var/tmp/bperf.sock 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1883622 ']' 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:52.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:52.853 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:52.853 [2024-07-23 06:28:46.104086] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:32:52.854 [2024-07-23 06:28:46.104182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883622 ] 00:32:52.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:52.854 Zero copy mechanism will not be used. 00:32:52.854 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.854 [2024-07-23 06:28:46.135923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:52.854 [2024-07-23 06:28:46.163306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.112 [2024-07-23 06:28:46.248503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.112 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.112 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:53.112 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:53.112 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:53.112 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:53.370 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.370 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.941 nvme0n1 00:32:53.941 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:53.941 06:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:53.941 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:53.941 Zero copy mechanism will not be used. 00:32:53.941 Running I/O for 2 seconds... 
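Once framework_start_init returns, the remote namespace is attached over NVMe/TCP with the data digest enabled and the 2-second workload is driven through bdevperf's Python helper; a sketch using the same paths and arguments as the RPC calls quoted above:

    # --ddgst enables the NVMe/TCP data digest (a CRC32C over every data PDU),
    # which is what feeds the accel crc32c counters checked after each run
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # run the configured workload and print the latency table
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests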
00:32:55.845 00:32:55.845 Latency(us) 00:32:55.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.845 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:55.845 nvme0n1 : 2.01 2159.99 270.00 0.00 0.00 7401.54 6068.15 10291.58 00:32:55.845 =================================================================================================================== 00:32:55.845 Total : 2159.99 270.00 0.00 0.00 7401.54 6068.15 10291.58 00:32:55.845 0 00:32:55.845 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:55.845 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:55.845 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:55.845 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:55.845 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:55.845 | select(.opcode=="crc32c") 00:32:55.845 | "\(.module_name) \(.executed)"' 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1883622 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1883622 ']' 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1883622 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883622 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1883622' 00:32:56.105 killing process with pid 1883622 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1883622 00:32:56.105 Received shutdown signal, test time was about 2.000000 seconds 00:32:56.105 00:32:56.105 Latency(us) 00:32:56.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.105 =================================================================================================================== 00:32:56.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:56.105 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1883622 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1884024 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1884024 /var/tmp/bperf.sock 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1884024 ']' 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:56.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.364 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:56.364 [2024-07-23 06:28:49.680493] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:32:56.364 [2024-07-23 06:28:49.680586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884024 ] 00:32:56.623 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.623 [2024-07-23 06:28:49.712337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
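After each run the script asks the bperf app which accel module actually executed the crc32c operations and compares it against the expected one (software here, since scan_dsa is false); a sketch of that check, reusing the jq filter quoted above:

    # keep only the crc32c row of accel_get_stats: "<module_name> <executed>"
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    exp_module=software
    (( acc_executed > 0 ))              # digests must have been computed at all
    [[ $acc_module == "$exp_module" ]]  # ...and by the expected module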
00:32:56.623 [2024-07-23 06:28:49.743423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.623 [2024-07-23 06:28:49.831930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.623 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:56.623 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:56.623 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:56.623 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:56.623 06:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:57.194 06:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.194 06:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.194 nvme0n1 00:32:57.456 06:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:57.456 06:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:57.456 Running I/O for 2 seconds... 
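For orientation, run_bperf takes rw, bs, qd and scan_dsa positionally, and the clean-path suite walks the matrix sketched below; the 4 KiB read call itself sits above this excerpt, but its result table opens it:

    # host/digest.sh: run_bperf <rw> <bs> <qd> <scan_dsa>
    run_bperf randread  4096   128 false   # 4 KiB reads,   qd 128
    run_bperf randread  131072 16  false   # 128 KiB reads, qd 16
    run_bperf randwrite 4096   128 false   # 4 KiB writes,  qd 128  (this run)
    run_bperf randwrite 131072 16  false   # 128 KiB writes, qd 16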
00:32:59.362 00:32:59.362 Latency(us) 00:32:59.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.362 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.362 nvme0n1 : 2.01 19054.61 74.43 0.00 0.00 6701.51 5606.97 14854.83 00:32:59.362 =================================================================================================================== 00:32:59.362 Total : 19054.61 74.43 0.00 0.00 6701.51 5606.97 14854.83 00:32:59.362 0 00:32:59.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:59.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:59.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:59.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:59.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:59.362 | select(.opcode=="crc32c") 00:32:59.362 | "\(.module_name) \(.executed)"' 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1884024 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1884024 ']' 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1884024 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884024 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884024' 00:32:59.621 killing process with pid 1884024 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1884024 00:32:59.621 Received shutdown signal, test time was about 2.000000 seconds 00:32:59.621 00:32:59.621 Latency(us) 00:32:59.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.621 =================================================================================================================== 00:32:59.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:59.621 06:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1884024 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1884433 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1884433 /var/tmp/bperf.sock 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1884433 ']' 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:59.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:59.879 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:59.879 [2024-07-23 06:28:53.203503] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:32:59.879 [2024-07-23 06:28:53.203597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884433 ] 00:32:59.879 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:59.879 Zero copy mechanism will not be used. 00:33:00.137 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.137 [2024-07-23 06:28:53.236011] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
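As a sanity check on the result tables above, the MiB/s column is simply IOPS multiplied by the I/O size:

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f\n", 19560.52 * 4096   / 1048576 }'   # 76.41  (4 KiB randread run)
    awk 'BEGIN { printf "%.2f\n", 2159.99  * 131072 / 1048576 }'   # 270.00 (128 KiB randread run)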
00:33:00.137 [2024-07-23 06:28:53.263684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.137 [2024-07-23 06:28:53.351591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.137 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:00.137 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:00.137 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:00.137 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:00.137 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:00.704 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:00.704 06:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:00.964 nvme0n1 00:33:00.964 06:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:00.964 06:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:00.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:00.964 Zero copy mechanism will not be used. 00:33:00.964 Running I/O for 2 seconds... 
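When a run finishes, the suite tears the bperf process down with the killprocess helper seen throughout this log; in outline (pid 1884433 is the one belonging to this run):

    pid=1884433
    kill -0 "$pid"                                      # still alive?
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]]    # refuse to kill a sudo wrapper
    kill "$pid"                                         # triggers the shutdown summary table
    wait "$pid"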
00:33:02.865 00:33:02.865 Latency(us) 00:33:02.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.865 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:02.865 nvme0n1 : 2.01 1762.33 220.29 0.00 0.00 9055.29 3276.80 10679.94 00:33:02.865 =================================================================================================================== 00:33:02.866 Total : 1762.33 220.29 0.00 0.00 9055.29 3276.80 10679.94 00:33:02.866 0 00:33:02.866 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:02.866 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:02.866 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:02.866 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:02.866 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:02.866 | select(.opcode=="crc32c") 00:33:02.866 | "\(.module_name) \(.executed)"' 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1884433 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1884433 ']' 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1884433 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.125 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884433 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884433' 00:33:03.383 killing process with pid 1884433 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1884433 00:33:03.383 Received shutdown signal, test time was about 2.000000 seconds 00:33:03.383 00:33:03.383 Latency(us) 00:33:03.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.383 =================================================================================================================== 00:33:03.383 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1884433 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1883190 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1883190 ']' 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1883190 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.383 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883190 00:33:03.642 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:03.642 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:03.642 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1883190' 00:33:03.642 killing process with pid 1883190 00:33:03.642 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1883190 00:33:03.642 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1883190 00:33:03.642 00:33:03.642 real 0m14.939s 00:33:03.642 user 0m29.022s 00:33:03.642 sys 0m4.096s 00:33:03.642 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.642 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.642 ************************************ 00:33:03.642 END TEST nvmf_digest_clean 00:33:03.642 ************************************ 00:33:03.900 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:03.900 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:03.900 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:03.900 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.900 06:28:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:03.900 ************************************ 00:33:03.900 START TEST nvmf_digest_error 00:33:03.900 ************************************ 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1884979 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1884979 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1884979 ']' 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:03.900 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:03.900 [2024-07-23 06:28:57.065691] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:03.900 [2024-07-23 06:28:57.065768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.900 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.900 [2024-07-23 06:28:57.104888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:03.900 [2024-07-23 06:28:57.133117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.900 [2024-07-23 06:28:57.215689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.900 [2024-07-23 06:28:57.215746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.900 [2024-07-23 06:28:57.215769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.900 [2024-07-23 06:28:57.215781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.900 [2024-07-23 06:28:57.215791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
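The error-path test that starts here brings up its own NVMe-oF target inside the cvl_0_0_ns_spdk namespace and holds it at --wait-for-rpc so crc32c can be re-routed before initialization; a sketch of that bring-up (the explicit framework_start_init call is implied rather than quoted in the log):

    # start the target idle inside the test namespace, all tracepoint groups on
    ip netns exec cvl_0_0_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

    # route crc32c through the "error" accel module, then finish init; the null0
    # bdev and the TCP listener on 10.0.0.2:4420 are created next, as logged below
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
    $SPDK/scripts/rpc.py framework_start_init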
00:33:03.900 [2024-07-23 06:28:57.215823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.159 [2024-07-23 06:28:57.300403] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.159 null0 00:33:04.159 [2024-07-23 06:28:57.415555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.159 [2024-07-23 06:28:57.439784] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1885007 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1885007 /var/tmp/bperf.sock 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1885007 ']' 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:04.159 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.159 [2024-07-23 06:28:57.489736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:04.159 [2024-07-23 06:28:57.489813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885007 ] 00:33:04.417 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.418 [2024-07-23 06:28:57.522434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:04.418 [2024-07-23 06:28:57.550862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.418 [2024-07-23 06:28:57.636981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.418 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.418 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:04.418 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:04.418 06:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:05.002 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:05.002 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.002 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.002 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.002 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.002 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.293 nvme0n1 00:33:05.293 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 256 00:33:05.293 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.293 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.293 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.293 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:05.293 06:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.293 Running I/O for 2 seconds... 00:33:05.293 [2024-07-23 06:28:58.568358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.293 [2024-07-23 06:28:58.568416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.293 [2024-07-23 06:28:58.568434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.293 [2024-07-23 06:28:58.585230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.293 [2024-07-23 06:28:58.585269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.293 [2024-07-23 06:28:58.585290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.293 [2024-07-23 06:28:58.595898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.293 [2024-07-23 06:28:58.595950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.293 [2024-07-23 06:28:58.595972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.293 [2024-07-23 06:28:58.611056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.293 [2024-07-23 06:28:58.611093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.293 [2024-07-23 06:28:58.611113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.293 [2024-07-23 06:28:58.624383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.293 [2024-07-23 06:28:58.624419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.293 [2024-07-23 06:28:58.624438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.564 [2024-07-23 06:28:58.637473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.637510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.637530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.652032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.652068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.652093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.666375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.666411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.666436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.678167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.678203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.678222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.693184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.693224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.693245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.706211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.706246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.706265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.721479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.721515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.721534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.733048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 
00:33:05.565 [2024-07-23 06:28:58.733084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.733103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.747230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.747266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.747285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.762849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.762881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.762899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.776270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.776305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.776325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.789931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.789967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.789987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.802995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.803042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.803071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.816780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.816810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.816826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.830390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.830426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.830446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.843396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.843431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.843451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.858202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.858238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.858258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.872061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.872096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.872116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.883931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.883977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.883997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.565 [2024-07-23 06:28:58.898054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.565 [2024-07-23 06:28:58.898089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.565 [2024-07-23 06:28:58.898108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:58.911896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:58.911947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:58.911967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:58.925327] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:58.925372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:58.925396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:58.939517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:58.939553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:58.939572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:58.951134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:58.951169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:58.951189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:58.964879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:58.964912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:58.964946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:58.978719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:58.978755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:58.978775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:58.993515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:58.993550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:58.993577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.006172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.006208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.006228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.021974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.022005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.022022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.035090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.035120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.035145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.047811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.047844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.047872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.061223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.061255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.061271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.074363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.074398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.074417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.085358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.085392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.085411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.100381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.100411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.100428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.112367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.112398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.112415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.127424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.127459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.127479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.142453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.142488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.142507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.825 [2024-07-23 06:28:59.156730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:05.825 [2024-07-23 06:28:59.156769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.825 [2024-07-23 06:28:59.156787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.169105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.169140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.169169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.183706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.183737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.183753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.196521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.196555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.196576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.212207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.212242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.212261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.224306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.224341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.224360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.239039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.239074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.239094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.254584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.254636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.254681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.267461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.267495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.267515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.281878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.281923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.281939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.294825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.294855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:06.084 [2024-07-23 06:28:59.294872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.084 [2024-07-23 06:28:59.308594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.084 [2024-07-23 06:28:59.308645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.084 [2024-07-23 06:28:59.308681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.323078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.323113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.323143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.335705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.335735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.335763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.351924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.351960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.351980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.363377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.363412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.363432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.379594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.379643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.379679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.395432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.395466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11716 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.395495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.408509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.408543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.408562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.085 [2024-07-23 06:28:59.425575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.085 [2024-07-23 06:28:59.425610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.085 [2024-07-23 06:28:59.425664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.437361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.437395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.343 [2024-07-23 06:28:59.437420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.452589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.452630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.343 [2024-07-23 06:28:59.452671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.467634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.467681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.343 [2024-07-23 06:28:59.467701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.481604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.481667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.343 [2024-07-23 06:28:59.481685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.494164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.494199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.343 [2024-07-23 06:28:59.494219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.507426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.507460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.343 [2024-07-23 06:28:59.507480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.522016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.522058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.343 [2024-07-23 06:28:59.522078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.343 [2024-07-23 06:28:59.536843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.343 [2024-07-23 06:28:59.536874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.536893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.548559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.548594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.548631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.564107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.564141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.564161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.579787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.579819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.579840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.594021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 
00:33:06.344 [2024-07-23 06:28:59.594055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.594075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.606655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.606704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.606721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.621452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.621487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.621510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.633768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.633796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.633814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.648460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.648490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.648522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.662730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.662775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.662792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.344 [2024-07-23 06:28:59.675485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.344 [2024-07-23 06:28:59.675519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.344 [2024-07-23 06:28:59.675539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.602 [2024-07-23 06:28:59.689719] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.602 [2024-07-23 06:28:59.689750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.602 [2024-07-23 06:28:59.689772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.602 [2024-07-23 06:28:59.702496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.602 [2024-07-23 06:28:59.702541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.602 [2024-07-23 06:28:59.702562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.602 [2024-07-23 06:28:59.716759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.602 [2024-07-23 06:28:59.716804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.602 [2024-07-23 06:28:59.716826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.602 [2024-07-23 06:28:59.730733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.602 [2024-07-23 06:28:59.730785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.730803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.744042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.744076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.744096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.758123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.758160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.758178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.771105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.771141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.771160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:06.603 [2024-07-23 06:28:59.784478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.784512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.784532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.797696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.797727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.797747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.811429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.811463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.811483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.824900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.824948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.824968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.839477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.839528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.839548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.852563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.852598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.852627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.864881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.864936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.864963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.879350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.879385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.879404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.891262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.891295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.891315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.907083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.907117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.907137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.919313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.919352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.919374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.603 [2024-07-23 06:28:59.933380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.603 [2024-07-23 06:28:59.933415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.603 [2024-07-23 06:28:59.933434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:28:59.948396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:28:59.948435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:28:59.948459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:28:59.960824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:28:59.960855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:28:59.960879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:28:59.974863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:28:59.974893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:28:59.974913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:28:59.991418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:28:59.991452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:28:59.991480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:29:00.003485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:29:00.003529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:29:00.003555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:29:00.016864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:29:00.016934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:29:00.016953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:29:00.032684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:29:00.032720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:29:00.032747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:29:00.044233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:29:00.044264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:29:00.044287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:29:00.057795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:29:00.057825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:06.862 [2024-07-23 06:29:00.057842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:29:00.071179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:29:00.071214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:29:00.071234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.862 [2024-07-23 06:29:00.084974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.862 [2024-07-23 06:29:00.085024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.862 [2024-07-23 06:29:00.085044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.098467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.098508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.098529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.114597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.114681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.114700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.128972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.129017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.129033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.143992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.144044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.144061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.155111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.155146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:3330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.155165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.170899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.170948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.170968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.184769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.184802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.184819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.863 [2024-07-23 06:29:00.199318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:06.863 [2024-07-23 06:29:00.199366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.863 [2024-07-23 06:29:00.199383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.210729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.210758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.210775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.227049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.227095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.227114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.239135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.239190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.253311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.253345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.253364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.267749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.267779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.267796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.282315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.282350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.282369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.294041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.294075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.294094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.308421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.308456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.308476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.323491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.323526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.323546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.338216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.338250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.338270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.350580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 
00:33:07.122 [2024-07-23 06:29:00.350622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.350666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.366292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.366328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.366349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.379277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.379307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.379324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.394052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.394088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.394108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.408829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.408859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.408876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.420798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.420829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.420846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.435588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.435632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.435656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.451274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.451310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.451329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.122 [2024-07-23 06:29:00.464485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.122 [2024-07-23 06:29:00.464521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.122 [2024-07-23 06:29:00.464541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.381 [2024-07-23 06:29:00.480439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.381 [2024-07-23 06:29:00.480475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.381 [2024-07-23 06:29:00.480494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.381 [2024-07-23 06:29:00.494980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.381 [2024-07-23 06:29:00.495015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.381 [2024-07-23 06:29:00.495035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.381 [2024-07-23 06:29:00.508093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.381 [2024-07-23 06:29:00.508129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.381 [2024-07-23 06:29:00.508149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.381 [2024-07-23 06:29:00.522102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.381 [2024-07-23 06:29:00.522142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.381 [2024-07-23 06:29:00.522162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.381 [2024-07-23 06:29:00.535905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320) 00:33:07.381 [2024-07-23 06:29:00.535951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.381 [2024-07-23 06:29:00.535968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.381 [2024-07-23 06:29:00.547538] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137b320)
00:33:07.381 [2024-07-23 06:29:00.547578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:07.381 [2024-07-23 06:29:00.547598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:07.381
00:33:07.381 Latency(us)
00:33:07.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.381 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:07.381 nvme0n1 : 2.01 18365.80 71.74 0.00 0.00 6958.95 3058.35 18155.90
00:33:07.381 ===================================================================================================================
00:33:07.381 Total : 18365.80 71.74 0.00 0.00 6958.95 3058.35 18155.90
00:33:07.381 0
00:33:07.381 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:07.381 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:07.381 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:07.381 | .driver_specific
00:33:07.381 | .nvme_error
00:33:07.381 | .status_code
00:33:07.381 | .command_transient_transport_error'
00:33:07.381 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1885007
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1885007 ']'
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1885007
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1885007
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1885007'
00:33:07.641 killing process with pid 1885007
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1885007
00:33:07.641 Received shutdown signal, test time was about 2.000000 seconds
00:33:07.641
00:33:07.641 Latency(us)
00:33:07.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.641 ===================================================================================================================
00:33:07.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:07.641 06:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1885007
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1885414
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1885414 /var/tmp/bperf.sock
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1885414 ']'
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:07.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:07.901 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:07.901 [2024-07-23 06:29:01.120211] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization...
00:33:07.901 [2024-07-23 06:29:01.120307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885414 ]
00:33:07.901 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:07.901 Zero copy mechanism will not be used.
00:33:07.901 EAL: No free 2048 kB hugepages reported on node 1
00:33:07.901 [2024-07-23 06:29:01.152736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:07.901 [2024-07-23 06:29:01.180591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.159 [2024-07-23 06:29:01.267117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.159 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.159 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:08.159 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:08.159 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:08.417 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:08.417 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.417 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.417 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.417 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.417 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.675 nvme0n1 00:33:08.675 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:08.675 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.675 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.675 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.675 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:08.675 06:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:08.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:08.935 Zero copy mechanism will not be used. 00:33:08.935 Running I/O for 2 seconds... 
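Before the two-second I/O loop starts, the trace above sets up the data-digest error case end to end: bdevperf is launched idle with -z on its own RPC socket, NVMe error counting and unlimited retries are enabled, the controller is attached with --ddgst so data digests are carried and verified on the NVMe/TCP connection, crc32c error injection is armed in the accel layer, and perform_tests kicks off the workload. A condensed sketch of that sequence, under the same assumptions as before (paths exactly as in the trace; the spdk and rpc variable names are illustrative; the error-injection call is assumed to go to the default RPC socket /var/tmp/spdk.sock, which is where the test's rpc_cmd points in this environment):

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"

    # Start bdevperf idle (-z) on its own RPC socket: 128 KiB random reads, QD 16, 2 seconds.
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &

    # Count NVMe error completions per status code and retry transport errors indefinitely.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digest enabled on the NVMe/TCP queue pairs.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm crc32c corruption in the accel layer (arguments exactly as in the trace above),
    # so data-digest verification fails and each READ completes with status 00/22,
    # i.e. COMMAND TRANSIENT TRANSPORT ERROR, as seen in the output that follows.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed workload in the already-running bdevperf process.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The repeated triplets below (data digest error in nvme_tcp.c, the READ command print, and the 00/22 completion) are the expected result of that injection while the workload runs.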
00:33:08.935 [2024-07-23 06:29:02.084984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.085046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.085065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.097761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.097794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.097813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.110040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.110071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.110088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.122854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.122884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.122901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.134327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.134357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.134374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.145911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.145957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.145974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.157372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.157418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.157435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.169096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.169142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.169160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.180591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.180628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.180671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.192145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.192174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.192191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.203702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.203747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.203773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.215313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.215344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.215361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.226792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.226822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.226839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.238311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.935 [2024-07-23 06:29:02.238340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.935 [2024-07-23 06:29:02.238357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.935 [2024-07-23 06:29:02.249897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.936 [2024-07-23 06:29:02.249942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.936 [2024-07-23 06:29:02.249959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.936 [2024-07-23 06:29:02.261566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.936 [2024-07-23 06:29:02.261626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.936 [2024-07-23 06:29:02.261670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.936 [2024-07-23 06:29:02.273309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:08.936 [2024-07-23 06:29:02.273355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.936 [2024-07-23 06:29:02.273373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.285284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.285329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.285345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.296888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.296933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.296949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.308582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.308640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.308673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.320976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.321025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:09.197 [2024-07-23 06:29:02.321042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.331659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.331705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.331722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.343524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.343568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.343584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.355401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.355446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.355462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.367244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.367291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.367309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.378765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.378810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.378828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.390989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.391018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.391034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.402735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.402781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.402798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.414634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.414668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.414701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.426358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.426392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.426412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.438033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.438067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.438086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.449798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.449830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.449848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.461478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.461513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.461532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.472978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.473012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.473031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.484586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.484630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.484651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.496404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.496438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.496457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.508201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.508245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.508271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.519919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.519948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.519965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.197 [2024-07-23 06:29:02.531631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.197 [2024-07-23 06:29:02.531678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.197 [2024-07-23 06:29:02.531696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.456 [2024-07-23 06:29:02.543927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.543962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.543982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.555623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.555657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.555691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.567334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 
00:33:09.457 [2024-07-23 06:29:02.567368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.567388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.579084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.579117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.579136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.590721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.590751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.590768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.602483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.602518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.602537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.614314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.614354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.614374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.625919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.625948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.625965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.637596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.637638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.637658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.649579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.649620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.649642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.661255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.661288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.661307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.672858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.672903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.672920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.684417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.684450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.684469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.696329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.696362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.696381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.708514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.708547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.708567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.720041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.720075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.720094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.731673] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.731703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.731721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.743422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.743455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.743475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.755421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.755455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.755475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.767475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.767508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.767526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.779401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.779434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.779453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.457 [2024-07-23 06:29:02.790973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.457 [2024-07-23 06:29:02.791005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.457 [2024-07-23 06:29:02.791024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.802826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.802855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.802873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.814743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.814780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.814798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.826590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.826630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.826651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.838260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.838293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.838313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.850017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.850051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.850070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.861610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.861663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.861696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.873591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.873634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.873655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.885362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.885396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.885415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.897043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.897077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.897095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.908744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.908773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.908790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.920422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.920455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.920474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.932187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.932221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.932240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.943859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.943907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.943924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.955836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.955865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.955882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.967634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.967667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.967699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.979374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.979406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.979426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:02.991010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:02.991044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:02.991063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:03.002690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:03.002734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:03.002751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:03.014502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:03.014536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:03.014561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:03.026355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:03.026388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:03.026407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:03.038167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:03.038201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.718 [2024-07-23 06:29:03.038220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.718 [2024-07-23 06:29:03.049749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.718 [2024-07-23 06:29:03.049779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:09.718 [2024-07-23 06:29:03.049795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.061629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.061677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.061696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.073559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.073594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.073621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.085349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.085383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.085402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.097180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.097215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.097233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.108865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.108896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.108912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.120488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.120528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.120547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.132170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.132205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.132227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.143539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.143573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.143591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.155257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.155291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.155311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.166827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.166857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.166875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.178555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.178589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.178609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.190515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.190549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.190568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.202388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.202422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.202441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.213990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.214039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.214058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.225734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.225763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.225780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.237350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.237384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.237403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.249151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.249185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.249204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.260824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.260855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.260873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.272658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.272688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.272704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.284706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.284735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.284752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.296479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 
00:33:09.978 [2024-07-23 06:29:03.296508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.296523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.308011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.308054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.308070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.319861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:09.978 [2024-07-23 06:29:03.319891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.319913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.331691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.331736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.331753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.343518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.343561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.343577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.355310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.355343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.355362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.366929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.366958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.366974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.378442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.378474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.378493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.390337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.390370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.390390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.402585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.402625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.402646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.414251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.414283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.414302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.425753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.425796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.425813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.437470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.437503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.437522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.449358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.449391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.449410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.461094] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.461126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.461145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.473178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.473211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.473229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.484816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.484846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.484864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.496554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.496587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.496606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.508402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.508435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.508454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.520436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.520469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.520494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.532083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.532116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.532136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:10.237 [2024-07-23 06:29:03.543657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.543704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.543721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.555476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.555510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.555529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.567294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.567328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.567347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.237 [2024-07-23 06:29:03.578829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.237 [2024-07-23 06:29:03.578860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.237 [2024-07-23 06:29:03.578877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.590553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.590601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.590629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.602534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.602568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.602587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.614557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.614591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.614611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.626330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.626376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.626398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.638088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.638121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.638140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.649948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.649976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.650008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.661588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.661630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.661652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.673642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.673689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.673706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.685543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.685588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.685607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.697601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.697642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.697676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.709386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.709432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.709451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.721292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.721325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.721344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.732838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.732867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.732884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.744529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.744562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.744582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.756337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.756370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.756389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.767914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.767961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.767980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.779463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.779496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.496 [2024-07-23 06:29:03.779516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.791174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.791207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.791226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.802873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.802901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.802919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.814605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.814656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.814700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.826409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.826444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.826473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.496 [2024-07-23 06:29:03.838277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.496 [2024-07-23 06:29:03.838312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.496 [2024-07-23 06:29:03.838332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.850046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.850081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.850100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.861981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.862015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.862035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.873848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.873878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.873912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.885720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.885750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.885768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.897585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.897623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.897645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.909293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.909326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.909350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.920960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.920993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.921012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.932710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.932744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.932762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.944569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.944621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.755 [2024-07-23 06:29:03.944642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.755 [2024-07-23 06:29:03.956346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.755 [2024-07-23 06:29:03.956379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.756 [2024-07-23 06:29:03.956397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.756 [2024-07-23 06:29:03.968363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.756 [2024-07-23 06:29:03.968398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.756 [2024-07-23 06:29:03.968423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.756 [2024-07-23 06:29:03.980138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.756 [2024-07-23 06:29:03.980172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.756 [2024-07-23 06:29:03.980191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.756 [2024-07-23 06:29:03.991905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.756 [2024-07-23 06:29:03.991937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.756 [2024-07-23 06:29:03.991953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.756 [2024-07-23 06:29:04.003734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.756 [2024-07-23 06:29:04.003763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.756 [2024-07-23 06:29:04.003781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.756 [2024-07-23 06:29:04.015714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 00:33:10.756 [2024-07-23 06:29:04.015744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.756 [2024-07-23 06:29:04.015763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.756 [2024-07-23 06:29:04.027494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00) 
00:33:09.978 [2024-07-23 06:29:03.296508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.978 [2024-07-23 06:29:03.296523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.978 [2024-07-23 06:29:03.308011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2af00)
00:33:10.756 | .nvme_error 00:33:10.756 | .status_code 00:33:10.756 | .command_transient_transport_error' 00:33:11.014 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:33:11.014 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1885414 00:33:11.014 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1885414 ']' 00:33:11.014 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1885414 00:33:11.014 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:11.014 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:11.014 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1885414 00:33:11.272 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:11.272 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:11.272 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1885414' 00:33:11.272 killing process with pid 1885414 00:33:11.272 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1885414 00:33:11.272 Received shutdown signal, test time was about 2.000000 seconds 00:33:11.272 00:33:11.272 Latency(us) 00:33:11.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.272 =================================================================================================================== 00:33:11.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.272 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1885414 00:33:11.272 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1885822 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1885822 /var/tmp/bperf.sock 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1885822 ']' 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.273 06:29:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:11.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.273 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:11.532 [2024-07-23 06:29:04.634359] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:11.533 [2024-07-23 06:29:04.634451] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885822 ] 00:33:11.533 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.533 [2024-07-23 06:29:04.666423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:11.533 [2024-07-23 06:29:04.699070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.533 [2024-07-23 06:29:04.790689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.791 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:11.791 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:11.791 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:11.791 06:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:12.049 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:12.049 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.049 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.049 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.049 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.049 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.308 nvme0n1 00:33:12.308 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:12.308 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.308 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.308 06:29:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.308 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:12.308 06:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:12.566 Running I/O for 2 seconds... 00:33:12.566 [2024-07-23 06:29:05.761518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.761877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.761913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 [2024-07-23 06:29:05.775831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.776158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.776192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 [2024-07-23 06:29:05.790113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.790391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.790424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 [2024-07-23 06:29:05.804587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.804950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.804983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 [2024-07-23 06:29:05.818975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.819264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.819297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 [2024-07-23 06:29:05.833121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.833399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.833431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 
[2024-07-23 06:29:05.847014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.847316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.847347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 [2024-07-23 06:29:05.860889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.566 [2024-07-23 06:29:05.861187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.566 [2024-07-23 06:29:05.861218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.566 [2024-07-23 06:29:05.874756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.567 [2024-07-23 06:29:05.875057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.567 [2024-07-23 06:29:05.875091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.567 [2024-07-23 06:29:05.888719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.567 [2024-07-23 06:29:05.889028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.567 [2024-07-23 06:29:05.889059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.567 [2024-07-23 06:29:05.902672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.567 [2024-07-23 06:29:05.902988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.567 [2024-07-23 06:29:05.903019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:05.916682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:05.917010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:05.917041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:05.930587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:05.930987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:05.931018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:33:12.826 [2024-07-23 06:29:05.944609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:05.944953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:05.944985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:05.958600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:05.958961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:05.958992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:05.972586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:05.972916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:05.972944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:05.986528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:05.986880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:05.986907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.000536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.000876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.000903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.014557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.014921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.014953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.028601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.028900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.028941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:33:12.826 [2024-07-23 06:29:06.042704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.042982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.043013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.056701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.057011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.057051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.070697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.070969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.071000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.084609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.084903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.084929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.098900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.099220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.099250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.112868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.113158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.113189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.126914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.127233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.127264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.140807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.141096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.141127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.154467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.154743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.154785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:12.826 [2024-07-23 06:29:06.168019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:12.826 [2024-07-23 06:29:06.168323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.826 [2024-07-23 06:29:06.168355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.182056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.182358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.182396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.196080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.196383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.196414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.209821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.210150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.210178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.223468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.223758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.223786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.237095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.237380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.237408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.250105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.250359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.250387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.263064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.263313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.263341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.275896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.276209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.276237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.288881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.289220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.289248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.301838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.302156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.302184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.315252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.315532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.315563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.329179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.329489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.329520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.343167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.343437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.343468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.357198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.357484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.357516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.371254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.371534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.371565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.385172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.385474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.385505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.399137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.399448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.399479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.413194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.413472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.413504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.085 [2024-07-23 06:29:06.427146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.085 [2024-07-23 06:29:06.427456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.085 [2024-07-23 06:29:06.427487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.345 [2024-07-23 06:29:06.441121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.441398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.441429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.455147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.455423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.455454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.469082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.469389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.469420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.483058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.483364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.483394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.497088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.497393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.497424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.511118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.511423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.511455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.525124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.525431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.525463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.539151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.539429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.539475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.553085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.553389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.553420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.567257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.567561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.567595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.581251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.581532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.581564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.595307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.595580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.595620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.609348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.609649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.609677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.623302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.623584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.623625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.637313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.637591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.637631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.651275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.651552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.651585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.665311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.665598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.665639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.346 [2024-07-23 06:29:06.679253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.346 [2024-07-23 06:29:06.679531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.346 [2024-07-23 06:29:06.679563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.605 [2024-07-23 06:29:06.693280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.693561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.693593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.707146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.707426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.707458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.721131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.721404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.721436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.735120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.735398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.735430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.749239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.749540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.749572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.763171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.763476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.763506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.777132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.777409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.777440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.791131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.791438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.791470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.805090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.805396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.805428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.819117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.819389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.819420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.833142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.833445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.833477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.847120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.847425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.847457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.861132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.861435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.861466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.875109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.875384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.875417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.889140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.889447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.889478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.903140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.903419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.903450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.917126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.917405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.917437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.931126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.931405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.931437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.606 [2024-07-23 06:29:06.945100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.606 [2024-07-23 06:29:06.945420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.606 [2024-07-23 06:29:06.945452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.866 [2024-07-23 06:29:06.959115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.866 [2024-07-23 06:29:06.959399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.866 [2024-07-23 06:29:06.959431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.866 [2024-07-23 06:29:06.973085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.866 [2024-07-23 06:29:06.973361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.866 [2024-07-23 06:29:06.973393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.866 [2024-07-23 06:29:06.987175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.866 [2024-07-23 06:29:06.987459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.866 [2024-07-23 06:29:06.987491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.000682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.000947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.000976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.014117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.014366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.014410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.027469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.027873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.027910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.041441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.041753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.041781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.055444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.055783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.055811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.069403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.069718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.069757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.083421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.083725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.083753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.097492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.097840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.097868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.111533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.111839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.111867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.125563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.125879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.125908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.139540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.139885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.139932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.153519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.153895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.153939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.167522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.167849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.167880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.181507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.181858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.181886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.195508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.195842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.195872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.867 [2024-07-23 06:29:07.209523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:13.867 [2024-07-23 06:29:07.209839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.867 [2024-07-23 06:29:07.209868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.223370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.223650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.223692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.237365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.237660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.237691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.251248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.251527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.251561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.265287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.265565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.265598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.279207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.279483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.279514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.293216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.293489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.293520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.307123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.307402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.307433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.321109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.321386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.321417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.335087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.335362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.335394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.348977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.349287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.349319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.362990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.363294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.363326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.377001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.377283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.377314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.390980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.391286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 
06:29:07.391322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.405041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.405320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.405351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.419050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.419360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.419392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.433053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.127 [2024-07-23 06:29:07.433359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.127 [2024-07-23 06:29:07.433390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.127 [2024-07-23 06:29:07.447023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.128 [2024-07-23 06:29:07.447299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.128 [2024-07-23 06:29:07.447331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.128 [2024-07-23 06:29:07.461027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.128 [2024-07-23 06:29:07.461334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.128 [2024-07-23 06:29:07.461366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.475043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.475319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.475350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.489056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.489364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 
[2024-07-23 06:29:07.489396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.503056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.503360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.503392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.517048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.517358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.517395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.530956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.531259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.531291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.544805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.545120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.545151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.558801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.559108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.559140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.572523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.572823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.572851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.585721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.586025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:14.389 [2024-07-23 06:29:07.586052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.599170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.599441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.599469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.612674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.612963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.612991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.625941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.626204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.626232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.639489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.639789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.639819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.652826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.653137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.666089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.666472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.666499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.679465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.679753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12142 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:14.389 [2024-07-23 06:29:07.679782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.692947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.693250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.693277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.706302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.706601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.706638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.389 [2024-07-23 06:29:07.719637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.389 [2024-07-23 06:29:07.719925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.389 [2024-07-23 06:29:07.719967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.649 [2024-07-23 06:29:07.733033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.649 [2024-07-23 06:29:07.733327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-07-23 06:29:07.733355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.649 [2024-07-23 06:29:07.746394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x165e940) with pdu=0x2000190f96f8 00:33:14.649 [2024-07-23 06:29:07.746673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-07-23 06:29:07.746703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.649 00:33:14.649 Latency(us) 00:33:14.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.649 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.649 nvme0n1 : 2.01 18337.82 71.63 0.00 0.00 6963.41 5534.15 15728.64 00:33:14.649 =================================================================================================================== 00:33:14.649 Total : 18337.82 71.63 0.00 0.00 6963.41 5534.15 15728.64 00:33:14.649 0 00:33:14.649 06:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:14.649 06:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:14.649 06:29:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:14.649 06:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:14.649 | .driver_specific 00:33:14.649 | .nvme_error 00:33:14.649 | .status_code 00:33:14.649 | .command_transient_transport_error' 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1885822 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1885822 ']' 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1885822 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1885822 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1885822' 00:33:14.912 killing process with pid 1885822 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1885822 00:33:14.912 Received shutdown signal, test time was about 2.000000 seconds 00:33:14.912 00:33:14.912 Latency(us) 00:33:14.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.912 =================================================================================================================== 00:33:14.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1885822 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1886346 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1886346 /var/tmp/bperf.sock 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
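The check traced just above (host/digest.sh@71 via get_transient_errcount) decides the result by counting how many completions were recorded as COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1. A minimal standalone sketch of that query, assuming the same rpc.py path, bperf socket and jq filter shown in the trace (the rpc/errcount variable names are illustrative only):

    # Read per-error-code NVMe statistics from the bdevperf app (enabled earlier with
    # --nvme-error-stat) and extract the transient transport error counter, as the traced
    # jq filter does.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The run above observed 144 such completions, so the (( errcount > 0 )) gate passes.
    (( errcount > 0 )) && echo "data digest errors surfaced as transient transport errors: $errcount"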
common/autotest_common.sh@829 -- # '[' -z 1886346 ']' 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:14.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:14.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:15.172 [2024-07-23 06:29:08.297733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:15.172 [2024-07-23 06:29:08.297826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886346 ] 00:33:15.172 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:15.172 Zero copy mechanism will not be used. 00:33:15.172 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.172 [2024-07-23 06:29:08.329302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:15.172 [2024-07-23 06:29:08.356796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.172 [2024-07-23 06:29:08.441166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.431 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:15.431 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:15.431 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:15.431 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:15.690 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:15.690 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.690 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:15.690 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.690 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.690 06:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-b nvme0 00:33:15.949 nvme0n1 00:33:15.949 06:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:15.949 06:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.949 06:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:15.949 06:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.949 06:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:15.949 06:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:16.209 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:16.209 Zero copy mechanism will not be used. 00:33:16.209 Running I/O for 2 seconds... 00:33:16.209 [2024-07-23 06:29:09.346273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.346728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.346777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.363837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.364245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.364281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.382711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.383112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.383147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.399475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.400049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.400080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.414736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.415140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.415186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.432491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.432866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.432898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.450217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.450783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.450827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.468908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.469362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.209 [2024-07-23 06:29:09.469390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.209 [2024-07-23 06:29:09.487052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.209 [2024-07-23 06:29:09.487525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.210 [2024-07-23 06:29:09.487555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.210 [2024-07-23 06:29:09.503338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.210 [2024-07-23 06:29:09.503844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.210 [2024-07-23 06:29:09.503875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.210 [2024-07-23 06:29:09.520891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.210 [2024-07-23 06:29:09.521247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.210 [2024-07-23 06:29:09.521276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.210 [2024-07-23 06:29:09.538532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.210 [2024-07-23 06:29:09.538853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.210 [2024-07-23 06:29:09.538885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.469 [2024-07-23 06:29:09.555004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.469 [2024-07-23 06:29:09.555352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.469 [2024-07-23 06:29:09.555382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.469 [2024-07-23 06:29:09.572249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.469 [2024-07-23 06:29:09.572647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.469 [2024-07-23 06:29:09.572679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.469 [2024-07-23 06:29:09.591668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.469 [2024-07-23 06:29:09.592153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.469 [2024-07-23 06:29:09.592197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.469 [2024-07-23 06:29:09.609118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.469 [2024-07-23 06:29:09.609641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.469 [2024-07-23 06:29:09.609686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.469 [2024-07-23 06:29:09.628197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.469 [2024-07-23 06:29:09.628595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.469 [2024-07-23 06:29:09.628636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.645201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.645608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.645645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.663568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.664058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 
[2024-07-23 06:29:09.664103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.682906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.683359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.683388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.701258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.701654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.701683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.720229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.720685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.720714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.738571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.739087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.739117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.756241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.756670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.756716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.774410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.774843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.774872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.792592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.793013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.793042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.470 [2024-07-23 06:29:09.811173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.470 [2024-07-23 06:29:09.811716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.470 [2024-07-23 06:29:09.811759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.830986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.831417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.831445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.850142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.850556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.850584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.869697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.870121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.870167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.887748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.888126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.888169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.906508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.906965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.906996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.923838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.924313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.924341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.942422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.942804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.942847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.961312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.961826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.961869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.980722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.981132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.981176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:09.998990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:09.999436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:09.999480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:10.019315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:10.019786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:10.019826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.733 [2024-07-23 06:29:10.041137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.733 [2024-07-23 06:29:10.041735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.733 [2024-07-23 06:29:10.041771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.734 [2024-07-23 06:29:10.061120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:16.734 [2024-07-23 06:29:10.061495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.734 [2024-07-23 06:29:10.061525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.005 [2024-07-23 06:29:10.082592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.005 [2024-07-23 06:29:10.083032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.005 [2024-07-23 06:29:10.083063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.005 [2024-07-23 06:29:10.101515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.005 [2024-07-23 06:29:10.101960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.005 [2024-07-23 06:29:10.101991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.005 [2024-07-23 06:29:10.119649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.005 [2024-07-23 06:29:10.120055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.005 [2024-07-23 06:29:10.120110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.005 [2024-07-23 06:29:10.138976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.005 [2024-07-23 06:29:10.139269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.005 [2024-07-23 06:29:10.139298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.005 [2024-07-23 06:29:10.158677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.159098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.159141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.175908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.176345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.176388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.194281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 
[2024-07-23 06:29:10.194702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.194732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.212435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.212871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.212902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.227832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.228204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.228232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.244090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.244508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.244536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.262197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.262640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.262694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.278158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.278518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.278547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.293223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.293843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.293873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.310212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.310607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.310658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.329837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.330215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.330243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.006 [2024-07-23 06:29:10.348360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.006 [2024-07-23 06:29:10.348846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.006 [2024-07-23 06:29:10.348877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.367011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.367445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.367474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.384366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.384743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.384788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.400813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.401214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.401244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.418766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.419177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.419228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.435913] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.436303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.436346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.455021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.455504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.455546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.472912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.473289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.473332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.489603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.490004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.490046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.508742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.509197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.509241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.525688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.526101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.526128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.543127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.543558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.543604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:17.265 [2024-07-23 06:29:10.560943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.561368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.561410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.580349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.580717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.580744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.265 [2024-07-23 06:29:10.598732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.265 [2024-07-23 06:29:10.599154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.265 [2024-07-23 06:29:10.599181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.617209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.617553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.617581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.635388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.635808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.635836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.653165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.653697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.653726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.670322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.670681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.670709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.687488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.687948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.687976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.706689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.707118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.707161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.724840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.725211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.725254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.741170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.741718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.741762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.758843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.759259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.759300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.777866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.778234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.778263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.792795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.793184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.793212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.808810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.809160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.809189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.827106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.827443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.827471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.845105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.845503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.845531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.524 [2024-07-23 06:29:10.862102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.524 [2024-07-23 06:29:10.862555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.524 [2024-07-23 06:29:10.862598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.784 [2024-07-23 06:29:10.880856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.784 [2024-07-23 06:29:10.881223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.784 [2024-07-23 06:29:10.881273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.784 [2024-07-23 06:29:10.899190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.784 [2024-07-23 06:29:10.899568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.784 [2024-07-23 06:29:10.899608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.784 [2024-07-23 06:29:10.916974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.784 [2024-07-23 06:29:10.917423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.784 [2024-07-23 06:29:10.917465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.784 [2024-07-23 06:29:10.935785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.784 [2024-07-23 06:29:10.936255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.784 [2024-07-23 06:29:10.936282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.784 [2024-07-23 06:29:10.953851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.784 [2024-07-23 06:29:10.954217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.784 [2024-07-23 06:29:10.954259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.784 [2024-07-23 06:29:10.969657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.784 [2024-07-23 06:29:10.970030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.784 [2024-07-23 06:29:10.970070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.784 [2024-07-23 06:29:10.987515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.784 [2024-07-23 06:29:10.987908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 [2024-07-23 06:29:10.987954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.785 [2024-07-23 06:29:11.006798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.785 [2024-07-23 06:29:11.007169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 [2024-07-23 06:29:11.007213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.785 [2024-07-23 06:29:11.025776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.785 [2024-07-23 06:29:11.026142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 [2024-07-23 06:29:11.026184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.785 [2024-07-23 06:29:11.043262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.785 [2024-07-23 06:29:11.043684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 
[2024-07-23 06:29:11.043712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.785 [2024-07-23 06:29:11.058767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.785 [2024-07-23 06:29:11.059138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 [2024-07-23 06:29:11.059166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.785 [2024-07-23 06:29:11.076234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.785 [2024-07-23 06:29:11.076663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 [2024-07-23 06:29:11.076712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.785 [2024-07-23 06:29:11.093657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.785 [2024-07-23 06:29:11.094021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 [2024-07-23 06:29:11.094062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.785 [2024-07-23 06:29:11.111494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:17.785 [2024-07-23 06:29:11.111920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.785 [2024-07-23 06:29:11.111948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.129913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.130297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.130326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.147336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.147644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.147677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.165539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.165914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.165957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.182686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.183040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.183083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.199808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.200158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.200200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.218464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.218885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.218928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.237510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.237908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.237956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.257517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.258049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.258095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.277679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.278079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.278122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.296589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.296987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.297031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.044 [2024-07-23 06:29:11.314711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16605c0) with pdu=0x2000190fef90 00:33:18.044 [2024-07-23 06:29:11.315180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.044 [2024-07-23 06:29:11.315209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.044 00:33:18.044 Latency(us) 00:33:18.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.044 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:18.044 nvme0n1 : 2.01 1712.71 214.09 0.00 0.00 9316.45 6747.78 21456.97 00:33:18.044 =================================================================================================================== 00:33:18.044 Total : 1712.71 214.09 0.00 0.00 9316.45 6747.78 21456.97 00:33:18.044 0 00:33:18.044 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:18.044 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:18.044 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:18.044 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:18.044 | .driver_specific 00:33:18.044 | .nvme_error 00:33:18.044 | .status_code 00:33:18.044 | .command_transient_transport_error' 00:33:18.318 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1886346 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1886346 ']' 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1886346 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1886346 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1886346' 00:33:18.319 killing process with pid 1886346 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1886346 00:33:18.319 Received shutdown signal, test time was about 2.000000 seconds 00:33:18.319 
00:33:18.319 Latency(us) 00:33:18.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.319 =================================================================================================================== 00:33:18.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.319 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1886346 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1884979 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1884979 ']' 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1884979 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884979 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884979' 00:33:18.587 killing process with pid 1884979 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1884979 00:33:18.587 06:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1884979 00:33:18.847 00:33:18.847 real 0m15.120s 00:33:18.847 user 0m30.288s 00:33:18.847 sys 0m3.960s 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:18.847 ************************************ 00:33:18.847 END TEST nvmf_digest_error 00:33:18.847 ************************************ 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:18.847 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:18.847 rmmod nvme_tcp 00:33:18.847 rmmod nvme_fabrics 00:33:19.108 rmmod nvme_keyring 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:19.108 06:29:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1884979 ']' 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1884979 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1884979 ']' 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1884979 00:33:19.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1884979) - No such process 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1884979 is not found' 00:33:19.108 Process with pid 1884979 is not found 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.108 06:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:21.015 00:33:21.015 real 0m34.434s 00:33:21.015 user 1m0.133s 00:33:21.015 sys 0m9.594s 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:21.015 ************************************ 00:33:21.015 END TEST nvmf_digest 00:33:21.015 ************************************ 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.015 ************************************ 00:33:21.015 START TEST nvmf_bdevperf 00:33:21.015 ************************************ 00:33:21.015 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:21.274 * Looking for test storage... 
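Before the bdevperf suite gets going, the digest teardown traced above has already unloaded the kernel initiator stack and flushed the test addresses. Condensed into a hand-runnable sketch (assuming this rig's cvl_0_* interface names; _remove_spdk_ns is the nvmf/common.sh helper whose internal steps are not shown in this trace):

  modprobe -v -r nvme-tcp      # emits the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns              # presumably drops the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1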
00:33:21.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:21.274 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:21.275 06:29:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:23.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:23.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:23.182 06:29:16 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:23.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:23.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.182 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:23.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:33:23.183 00:33:23.183 --- 10.0.0.2 ping statistics --- 00:33:23.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.183 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:23.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:33:23.183 00:33:23.183 --- 10.0.0.1 ping statistics --- 00:33:23.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.183 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1888688 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1888688 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1888688 ']' 
00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:23.183 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.183 [2024-07-23 06:29:16.404440] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:23.183 [2024-07-23 06:29:16.404520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.183 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.183 [2024-07-23 06:29:16.450932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:23.183 [2024-07-23 06:29:16.483042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:23.442 [2024-07-23 06:29:16.577158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.442 [2024-07-23 06:29:16.577209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.442 [2024-07-23 06:29:16.577242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.442 [2024-07-23 06:29:16.577253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.442 [2024-07-23 06:29:16.577262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
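nvmftestinit and nvmfappstart, traced above, split the two e810 ports across network namespaces so host-to-target TCP traffic crosses a real link, then launch the target inside the namespace. Condensed (a sketch; binary paths abbreviated, interface names and addresses specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # sanity check, ~0.2 ms above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # waitforlisten (nvmfpid=1888688) then polls /var/tmp/spdk.sock until the RPC server answers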
00:33:23.442 [2024-07-23 06:29:16.577344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.442 [2024-07-23 06:29:16.577652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.442 [2024-07-23 06:29:16.577655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.442 [2024-07-23 06:29:16.704125] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.442 Malloc0 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.442 [2024-07-23 06:29:16.769078] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:23.442 { 00:33:23.442 "params": { 00:33:23.442 "name": "Nvme$subsystem", 00:33:23.442 "trtype": "$TEST_TRANSPORT", 00:33:23.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.442 "adrfam": "ipv4", 00:33:23.442 "trsvcid": "$NVMF_PORT", 00:33:23.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.442 "hdgst": ${hdgst:-false}, 00:33:23.442 "ddgst": ${ddgst:-false} 00:33:23.442 }, 00:33:23.442 "method": "bdev_nvme_attach_controller" 00:33:23.442 } 00:33:23.442 EOF 00:33:23.442 )") 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:23.442 06:29:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:23.442 "params": { 00:33:23.442 "name": "Nvme1", 00:33:23.442 "trtype": "tcp", 00:33:23.442 "traddr": "10.0.0.2", 00:33:23.442 "adrfam": "ipv4", 00:33:23.442 "trsvcid": "4420", 00:33:23.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.442 "hdgst": false, 00:33:23.442 "ddgst": false 00:33:23.442 }, 00:33:23.442 "method": "bdev_nvme_attach_controller" 00:33:23.442 }' 00:33:23.701 [2024-07-23 06:29:16.814515] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:23.701 [2024-07-23 06:29:16.814587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888721 ] 00:33:23.701 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.701 [2024-07-23 06:29:16.847058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:23.701 [2024-07-23 06:29:16.875226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.701 [2024-07-23 06:29:16.960372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.959 Running I/O for 1 seconds... 
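The 1-second verify run above talks to a target that tgt_init provisioned a few lines earlier; collapsed out of the rpc_cmd trace, the provisioning sequence is:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf reads its Nvme1 attach parameters (the JSON printed above) from /dev/fd/62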
00:33:24.899 00:33:24.899 Latency(us) 00:33:24.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.899 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:24.899 Verification LBA range: start 0x0 length 0x4000 00:33:24.899 Nvme1n1 : 1.01 8713.68 34.04 0.00 0.00 14627.78 2985.53 15243.19 00:33:24.899 =================================================================================================================== 00:33:24.899 Total : 8713.68 34.04 0.00 0.00 14627.78 2985.53 15243.19 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1888908 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:25.161 { 00:33:25.161 "params": { 00:33:25.161 "name": "Nvme$subsystem", 00:33:25.161 "trtype": "$TEST_TRANSPORT", 00:33:25.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.161 "adrfam": "ipv4", 00:33:25.161 "trsvcid": "$NVMF_PORT", 00:33:25.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.161 "hdgst": ${hdgst:-false}, 00:33:25.161 "ddgst": ${ddgst:-false} 00:33:25.161 }, 00:33:25.161 "method": "bdev_nvme_attach_controller" 00:33:25.161 } 00:33:25.161 EOF 00:33:25.161 )") 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:25.161 06:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:25.161 "params": { 00:33:25.161 "name": "Nvme1", 00:33:25.161 "trtype": "tcp", 00:33:25.161 "traddr": "10.0.0.2", 00:33:25.161 "adrfam": "ipv4", 00:33:25.161 "trsvcid": "4420", 00:33:25.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.161 "hdgst": false, 00:33:25.161 "ddgst": false 00:33:25.161 }, 00:33:25.161 "method": "bdev_nvme_attach_controller" 00:33:25.161 }' 00:33:25.161 [2024-07-23 06:29:18.417036] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:25.161 [2024-07-23 06:29:18.417117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888908 ] 00:33:25.161 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.161 [2024-07-23 06:29:18.450906] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
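The second bdevperf instance being brought up here is the failure-injection half of the test: it runs the same verify workload for 15 seconds, and a few seconds in the harness kill -9's the target so queued commands complete as ABORTED - SQ DELETION (the completion dumps that follow). A sketch of the sequence in host/bdevperf.sh, using the pids from this run:

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!        # 1888908 here
  sleep 3
  kill -9 $nvmfpid      # 1888688, the nvmf_tgt started earlier; the TCP connection drops mid-run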
00:33:25.161 [2024-07-23 06:29:18.479569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.420 [2024-07-23 06:29:18.568143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.679 Running I/O for 15 seconds... 00:33:28.218 06:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1888688 00:33:28.218 06:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:28.218 [2024-07-23 06:29:21.386391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.386977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.386993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 
06:29:21.387443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.218 [2024-07-23 06:29:21.387553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.218 [2024-07-23 06:29:21.387570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.387982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.387999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45728 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:28.219 [2024-07-23 06:29:21.388798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.219 [2024-07-23 06:29:21.388859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.219 [2024-07-23 06:29:21.388874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.388887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.388921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.388936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.388953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.388972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.388989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.220 [2024-07-23 06:29:21.389068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.220 [2024-07-23 06:29:21.389100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.220 [2024-07-23 06:29:21.389132] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.389973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.389989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.390004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.390020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.390036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.390053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.390068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.390085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.390100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.390124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.390140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.390156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.390172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.390189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.220 [2024-07-23 06:29:21.390204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.220 [2024-07-23 06:29:21.390221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 
[2024-07-23 06:29:21.390483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.221 [2024-07-23 06:29:21.390713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167bea0 is same with the state(5) to be set 00:33:28.221 [2024-07-23 06:29:21.390744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.221 [2024-07-23 06:29:21.390756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.221 [2024-07-23 06:29:21.390767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46248 len:8 PRP1 0x0 PRP2 0x0 00:33:28.221 [2024-07-23 06:29:21.390780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.221 [2024-07-23 06:29:21.390839] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x167bea0 was disconnected and freed. reset controller. 
00:33:28.221 [2024-07-23 06:29:21.394730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.221 [2024-07-23 06:29:21.394797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.221 [2024-07-23 06:29:21.395563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.221 [2024-07-23 06:29:21.395595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.221 [2024-07-23 06:29:21.395627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.221 [2024-07-23 06:29:21.395863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.221 [2024-07-23 06:29:21.396129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.221 [2024-07-23 06:29:21.396153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.221 [2024-07-23 06:29:21.396169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.221 [2024-07-23 06:29:21.399761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.221 [2024-07-23 06:29:21.409055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.221 [2024-07-23 06:29:21.409507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.221 [2024-07-23 06:29:21.409538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.221 [2024-07-23 06:29:21.409557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.221 [2024-07-23 06:29:21.409806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.221 [2024-07-23 06:29:21.410049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.221 [2024-07-23 06:29:21.410073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.221 [2024-07-23 06:29:21.410088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.221 [2024-07-23 06:29:21.413673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.221 [2024-07-23 06:29:21.422993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.221 [2024-07-23 06:29:21.423537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.221 [2024-07-23 06:29:21.423585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.221 [2024-07-23 06:29:21.423603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.221 [2024-07-23 06:29:21.423855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.221 [2024-07-23 06:29:21.424099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.221 [2024-07-23 06:29:21.424122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.221 [2024-07-23 06:29:21.424137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.221 [2024-07-23 06:29:21.427731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.221 [2024-07-23 06:29:21.437027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.221 [2024-07-23 06:29:21.437490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.221 [2024-07-23 06:29:21.437521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.221 [2024-07-23 06:29:21.437538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.221 [2024-07-23 06:29:21.437790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.221 [2024-07-23 06:29:21.438034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.221 [2024-07-23 06:29:21.438057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.221 [2024-07-23 06:29:21.438072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.221 [2024-07-23 06:29:21.441655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.221 [2024-07-23 06:29:21.450946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.221 [2024-07-23 06:29:21.451402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.221 [2024-07-23 06:29:21.451432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.221 [2024-07-23 06:29:21.451450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.221 [2024-07-23 06:29:21.451707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.221 [2024-07-23 06:29:21.451951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.221 [2024-07-23 06:29:21.451975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.221 [2024-07-23 06:29:21.451990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.221 [2024-07-23 06:29:21.455562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.221 [2024-07-23 06:29:21.464859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.221 [2024-07-23 06:29:21.465332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.221 [2024-07-23 06:29:21.465358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.221 [2024-07-23 06:29:21.465388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.221 [2024-07-23 06:29:21.465654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.222 [2024-07-23 06:29:21.465898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.222 [2024-07-23 06:29:21.465922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.222 [2024-07-23 06:29:21.465937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.222 [2024-07-23 06:29:21.469509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.222 [2024-07-23 06:29:21.478806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.222 [2024-07-23 06:29:21.479277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.222 [2024-07-23 06:29:21.479307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.222 [2024-07-23 06:29:21.479325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.222 [2024-07-23 06:29:21.479563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.222 [2024-07-23 06:29:21.479816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.222 [2024-07-23 06:29:21.479841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.222 [2024-07-23 06:29:21.479856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.222 [2024-07-23 06:29:21.483430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.222 [2024-07-23 06:29:21.492725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.222 [2024-07-23 06:29:21.493192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.222 [2024-07-23 06:29:21.493223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.222 [2024-07-23 06:29:21.493241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.222 [2024-07-23 06:29:21.493480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.222 [2024-07-23 06:29:21.493734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.222 [2024-07-23 06:29:21.493758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.222 [2024-07-23 06:29:21.493778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.222 [2024-07-23 06:29:21.497353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.222 [2024-07-23 06:29:21.506662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.222 [2024-07-23 06:29:21.507114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.222 [2024-07-23 06:29:21.507145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.222 [2024-07-23 06:29:21.507163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.222 [2024-07-23 06:29:21.507402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.222 [2024-07-23 06:29:21.507657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.222 [2024-07-23 06:29:21.507681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.222 [2024-07-23 06:29:21.507696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.222 [2024-07-23 06:29:21.511273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.222 [2024-07-23 06:29:21.520566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.222 [2024-07-23 06:29:21.521040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.222 [2024-07-23 06:29:21.521071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.222 [2024-07-23 06:29:21.521089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.222 [2024-07-23 06:29:21.521327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.222 [2024-07-23 06:29:21.521570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.222 [2024-07-23 06:29:21.521594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.222 [2024-07-23 06:29:21.521609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.222 [2024-07-23 06:29:21.525203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.222 [2024-07-23 06:29:21.534497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.222 [2024-07-23 06:29:21.534933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.222 [2024-07-23 06:29:21.534963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.222 [2024-07-23 06:29:21.534981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.222 [2024-07-23 06:29:21.535219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.222 [2024-07-23 06:29:21.535462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.222 [2024-07-23 06:29:21.535486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.222 [2024-07-23 06:29:21.535501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.222 [2024-07-23 06:29:21.539084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.222 [2024-07-23 06:29:21.548429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.222 [2024-07-23 06:29:21.548888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.222 [2024-07-23 06:29:21.548925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.222 [2024-07-23 06:29:21.548945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.222 [2024-07-23 06:29:21.549184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.222 [2024-07-23 06:29:21.549428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.222 [2024-07-23 06:29:21.549451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.222 [2024-07-23 06:29:21.549466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.222 [2024-07-23 06:29:21.553053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.483 [2024-07-23 06:29:21.562367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.562833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.562864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.562882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.563121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.563364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.563387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.563402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.566987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.483 [2024-07-23 06:29:21.576281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.576744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.576775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.576793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.577032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.577275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.577298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.577313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.580900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.483 [2024-07-23 06:29:21.590200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.590637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.590680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.590697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.590955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.591204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.591228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.591243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.594829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.483 [2024-07-23 06:29:21.604114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.604682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.604714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.604732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.604971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.605214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.605237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.605252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.608860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.483 [2024-07-23 06:29:21.618147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.618603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.618642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.618661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.618899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.619142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.619165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.619180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.622764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.483 [2024-07-23 06:29:21.632057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.632511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.632542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.632559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.632810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.633054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.633077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.633092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.636680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.483 [2024-07-23 06:29:21.645980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.646427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.646458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.646476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.646724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.646968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.646991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.647006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.650579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.483 [2024-07-23 06:29:21.659880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.660336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.660366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.660384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.660633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.660876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.660900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.660914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.664489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.483 [2024-07-23 06:29:21.673804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.674260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.674291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.674309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.674547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.674802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.674825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.483 [2024-07-23 06:29:21.674840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.483 [2024-07-23 06:29:21.678415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.483 [2024-07-23 06:29:21.687707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.483 [2024-07-23 06:29:21.688141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.483 [2024-07-23 06:29:21.688172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.483 [2024-07-23 06:29:21.688196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.483 [2024-07-23 06:29:21.688436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.483 [2024-07-23 06:29:21.688690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.483 [2024-07-23 06:29:21.688714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.688729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.692303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.484 [2024-07-23 06:29:21.701592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.702059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.702086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.702101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.702353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.702596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.702630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.702656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.706238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.484 [2024-07-23 06:29:21.715547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.715986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.716017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.716035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.716274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.716517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.716540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.716555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.720139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.484 [2024-07-23 06:29:21.729443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.729929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.729970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.729992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.730254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.730498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.730526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.730542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.734136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.484 [2024-07-23 06:29:21.743458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.743898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.743936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.743953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.744191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.744433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.744456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.744471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.748058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.484 [2024-07-23 06:29:21.757273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.757719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.757748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.757765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.758007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.758213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.758233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.758245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.761842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.484 [2024-07-23 06:29:21.771179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.771640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.771684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.771700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.771930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.772185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.772209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.772224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.775798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.484 [2024-07-23 06:29:21.784978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.785489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.785520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.785538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.785797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.786040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.786063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.786078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.789529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.484 [2024-07-23 06:29:21.798866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.799405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.799432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.799448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.799672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.799890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.799924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.799937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.803575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.484 [2024-07-23 06:29:21.812693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.484 [2024-07-23 06:29:21.813162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.484 [2024-07-23 06:29:21.813193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.484 [2024-07-23 06:29:21.813211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.484 [2024-07-23 06:29:21.813477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.484 [2024-07-23 06:29:21.813743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.484 [2024-07-23 06:29:21.813765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.484 [2024-07-23 06:29:21.813778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.484 [2024-07-23 06:29:21.817348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.746 [2024-07-23 06:29:21.826695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.746 [2024-07-23 06:29:21.827152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.746 [2024-07-23 06:29:21.827180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.746 [2024-07-23 06:29:21.827195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.746 [2024-07-23 06:29:21.827448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.746 [2024-07-23 06:29:21.827717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.746 [2024-07-23 06:29:21.827739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.746 [2024-07-23 06:29:21.827753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.746 [2024-07-23 06:29:21.831351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.746 [2024-07-23 06:29:21.840724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.746 [2024-07-23 06:29:21.841171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.746 [2024-07-23 06:29:21.841202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.746 [2024-07-23 06:29:21.841219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.746 [2024-07-23 06:29:21.841457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.746 [2024-07-23 06:29:21.841719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.746 [2024-07-23 06:29:21.841741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.746 [2024-07-23 06:29:21.841754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.746 [2024-07-23 06:29:21.845294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.746 [2024-07-23 06:29:21.854738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.746 [2024-07-23 06:29:21.855189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.746 [2024-07-23 06:29:21.855220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.746 [2024-07-23 06:29:21.855238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.746 [2024-07-23 06:29:21.855477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.746 [2024-07-23 06:29:21.855737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.746 [2024-07-23 06:29:21.855759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.746 [2024-07-23 06:29:21.855772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.746 [2024-07-23 06:29:21.859314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.746 [2024-07-23 06:29:21.868607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.746 [2024-07-23 06:29:21.869041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.746 [2024-07-23 06:29:21.869071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.746 [2024-07-23 06:29:21.869089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.746 [2024-07-23 06:29:21.869328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.746 [2024-07-23 06:29:21.869570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.746 [2024-07-23 06:29:21.869593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.746 [2024-07-23 06:29:21.869624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.746 [2024-07-23 06:29:21.873206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.746 [2024-07-23 06:29:21.882494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.746 [2024-07-23 06:29:21.882954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.746 [2024-07-23 06:29:21.882980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.746 [2024-07-23 06:29:21.883011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.746 [2024-07-23 06:29:21.883269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.746 [2024-07-23 06:29:21.883512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.746 [2024-07-23 06:29:21.883535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.746 [2024-07-23 06:29:21.883549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.746 [2024-07-23 06:29:21.887136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.747 [2024-07-23 06:29:21.896428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.896864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.896895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.896913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.897151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.897394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.897417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.897432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.901017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.747 [2024-07-23 06:29:21.910301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.910792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.910840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.910858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.911097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.911339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.911362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.911377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.914961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.747 [2024-07-23 06:29:21.924267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.924749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.924776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.924792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.925036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.925279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.925302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.925318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.928908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.747 [2024-07-23 06:29:21.938211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.938653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.938694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.938709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.938972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.939215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.939239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.939254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.942844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.747 [2024-07-23 06:29:21.952141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.952575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.952606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.952635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.952875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.953118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.953141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.953156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.956746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.747 [2024-07-23 06:29:21.966063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.966499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.966531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.966549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.966806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.967050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.967074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.967089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.970675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.747 [2024-07-23 06:29:21.979968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.980398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.980429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.980447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.980699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.980943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.980967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.980982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.984557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.747 [2024-07-23 06:29:21.993888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:21.994341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:21.994368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:21.994399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:21.994663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:21.994907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:21.994930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:21.994945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:21.998522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.747 [2024-07-23 06:29:22.007822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:22.008252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:22.008283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:22.008301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:22.008540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:22.008796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:22.008820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:22.008841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:22.012421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.747 [2024-07-23 06:29:22.021723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.747 [2024-07-23 06:29:22.022188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.747 [2024-07-23 06:29:22.022220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.747 [2024-07-23 06:29:22.022238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.747 [2024-07-23 06:29:22.022477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.747 [2024-07-23 06:29:22.022734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.747 [2024-07-23 06:29:22.022758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.747 [2024-07-23 06:29:22.022773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.747 [2024-07-23 06:29:22.026351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.748 [2024-07-23 06:29:22.035664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.748 [2024-07-23 06:29:22.036127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-07-23 06:29:22.036158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.748 [2024-07-23 06:29:22.036175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.748 [2024-07-23 06:29:22.036414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.748 [2024-07-23 06:29:22.036670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.748 [2024-07-23 06:29:22.036694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.748 [2024-07-23 06:29:22.036709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.748 [2024-07-23 06:29:22.040286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.748 [2024-07-23 06:29:22.049585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.748 [2024-07-23 06:29:22.050022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-07-23 06:29:22.050053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.748 [2024-07-23 06:29:22.050071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.748 [2024-07-23 06:29:22.050310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.748 [2024-07-23 06:29:22.050553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.748 [2024-07-23 06:29:22.050576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.748 [2024-07-23 06:29:22.050590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.748 [2024-07-23 06:29:22.054181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.748 [2024-07-23 06:29:22.063478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.748 [2024-07-23 06:29:22.063913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-07-23 06:29:22.063950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.748 [2024-07-23 06:29:22.063969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.748 [2024-07-23 06:29:22.064207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.748 [2024-07-23 06:29:22.064450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.748 [2024-07-23 06:29:22.064473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.748 [2024-07-23 06:29:22.064488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.748 [2024-07-23 06:29:22.068081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.748 [2024-07-23 06:29:22.077376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.748 [2024-07-23 06:29:22.077840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-07-23 06:29:22.077871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:28.748 [2024-07-23 06:29:22.077888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:28.748 [2024-07-23 06:29:22.078127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:28.748 [2024-07-23 06:29:22.078370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.748 [2024-07-23 06:29:22.078393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.748 [2024-07-23 06:29:22.078408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.748 [2024-07-23 06:29:22.081994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.010 [2024-07-23 06:29:22.091297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.010 [2024-07-23 06:29:22.091731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-23 06:29:22.091762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.010 [2024-07-23 06:29:22.091780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.010 [2024-07-23 06:29:22.092019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.010 [2024-07-23 06:29:22.092262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.010 [2024-07-23 06:29:22.092285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.010 [2024-07-23 06:29:22.092301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.010 [2024-07-23 06:29:22.095894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.010 [2024-07-23 06:29:22.105191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.010 [2024-07-23 06:29:22.105631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-23 06:29:22.105672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.010 [2024-07-23 06:29:22.105688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.010 [2024-07-23 06:29:22.105936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.010 [2024-07-23 06:29:22.106185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.010 [2024-07-23 06:29:22.106209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.010 [2024-07-23 06:29:22.106224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.010 [2024-07-23 06:29:22.109810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.010 [2024-07-23 06:29:22.119102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.010 [2024-07-23 06:29:22.119574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-23 06:29:22.119605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.010 [2024-07-23 06:29:22.119635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.010 [2024-07-23 06:29:22.119876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.010 [2024-07-23 06:29:22.120118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.010 [2024-07-23 06:29:22.120141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.010 [2024-07-23 06:29:22.120156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.010 [2024-07-23 06:29:22.123742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.010 [2024-07-23 06:29:22.133040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.010 [2024-07-23 06:29:22.133489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-23 06:29:22.133537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.010 [2024-07-23 06:29:22.133555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.010 [2024-07-23 06:29:22.133807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.010 [2024-07-23 06:29:22.134050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.010 [2024-07-23 06:29:22.134073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.010 [2024-07-23 06:29:22.134088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.010 [2024-07-23 06:29:22.137676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.010 [2024-07-23 06:29:22.146985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.010 [2024-07-23 06:29:22.147420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-23 06:29:22.147451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.010 [2024-07-23 06:29:22.147468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.010 [2024-07-23 06:29:22.147719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.010 [2024-07-23 06:29:22.147963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.010 [2024-07-23 06:29:22.147986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.010 [2024-07-23 06:29:22.148001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.010 [2024-07-23 06:29:22.151588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.010 [2024-07-23 06:29:22.160898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.010 [2024-07-23 06:29:22.161352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.010 [2024-07-23 06:29:22.161382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.010 [2024-07-23 06:29:22.161400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.010 [2024-07-23 06:29:22.161648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.010 [2024-07-23 06:29:22.161892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.161915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.161930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.165509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.011 [2024-07-23 06:29:22.174834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.175292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.175323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.175341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.175580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.175835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.175859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.175874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.179451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.011 [2024-07-23 06:29:22.188748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.189232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.189259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.189289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.189538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.189803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.189827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.189843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.193420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.011 [2024-07-23 06:29:22.202717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.203188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.203213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.203233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.203501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.203758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.203781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.203796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.207374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.011 [2024-07-23 06:29:22.216676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.217133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.217163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.217180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.217419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.217674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.217699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.217714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.221295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.011 [2024-07-23 06:29:22.230596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.231020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.231051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.231069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.231307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.231549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.231572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.231587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.235177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.011 [2024-07-23 06:29:22.244467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.244906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.244937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.244956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.245194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.245437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.245465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.245481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.249073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.011 [2024-07-23 06:29:22.258374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.258815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.258846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.258863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.259103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.259346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.259369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.259383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.262971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.011 [2024-07-23 06:29:22.272263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.272694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.272726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.272744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.272983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.273225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.273248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.273263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.276852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.011 [2024-07-23 06:29:22.286043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.286514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.286545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.286563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.286816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.287066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.287090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.011 [2024-07-23 06:29:22.287105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.011 [2024-07-23 06:29:22.290682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.011 [2024-07-23 06:29:22.299907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.011 [2024-07-23 06:29:22.300340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.011 [2024-07-23 06:29:22.300370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.011 [2024-07-23 06:29:22.300387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.011 [2024-07-23 06:29:22.300626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.011 [2024-07-23 06:29:22.300862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.011 [2024-07-23 06:29:22.300883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.012 [2024-07-23 06:29:22.300913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.012 [2024-07-23 06:29:22.304357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.012 [2024-07-23 06:29:22.313325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.012 [2024-07-23 06:29:22.313718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-23 06:29:22.313747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.012 [2024-07-23 06:29:22.313763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.012 [2024-07-23 06:29:22.314005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.012 [2024-07-23 06:29:22.314210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.012 [2024-07-23 06:29:22.314230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.012 [2024-07-23 06:29:22.314242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.012 [2024-07-23 06:29:22.317260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.012 [2024-07-23 06:29:22.327125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.012 [2024-07-23 06:29:22.327568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-23 06:29:22.327599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.012 [2024-07-23 06:29:22.327625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.012 [2024-07-23 06:29:22.327885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.012 [2024-07-23 06:29:22.328143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.012 [2024-07-23 06:29:22.328167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.012 [2024-07-23 06:29:22.328182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.012 [2024-07-23 06:29:22.331681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.012 [2024-07-23 06:29:22.341067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.012 [2024-07-23 06:29:22.341524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.012 [2024-07-23 06:29:22.341555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.012 [2024-07-23 06:29:22.341572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.012 [2024-07-23 06:29:22.341851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.012 [2024-07-23 06:29:22.342098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.012 [2024-07-23 06:29:22.342122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.012 [2024-07-23 06:29:22.342137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.012 [2024-07-23 06:29:22.345726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.274 [2024-07-23 06:29:22.354993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.274 [2024-07-23 06:29:22.355451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.274 [2024-07-23 06:29:22.355482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.274 [2024-07-23 06:29:22.355499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.274 [2024-07-23 06:29:22.355754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.274 [2024-07-23 06:29:22.356009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.274 [2024-07-23 06:29:22.356033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.274 [2024-07-23 06:29:22.356048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.274 [2024-07-23 06:29:22.359676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.274 [2024-07-23 06:29:22.369049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.274 [2024-07-23 06:29:22.369481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.274 [2024-07-23 06:29:22.369512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.274 [2024-07-23 06:29:22.369530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.274 [2024-07-23 06:29:22.369782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.274 [2024-07-23 06:29:22.370026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.274 [2024-07-23 06:29:22.370049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.274 [2024-07-23 06:29:22.370063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.274 [2024-07-23 06:29:22.373577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.274 [2024-07-23 06:29:22.382598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.274 [2024-07-23 06:29:22.383029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.274 [2024-07-23 06:29:22.383057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.274 [2024-07-23 06:29:22.383087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.274 [2024-07-23 06:29:22.383314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.274 [2024-07-23 06:29:22.383527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.274 [2024-07-23 06:29:22.383547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.274 [2024-07-23 06:29:22.383566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.274 [2024-07-23 06:29:22.386617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.274 [2024-07-23 06:29:22.396465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.274 [2024-07-23 06:29:22.396964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.274 [2024-07-23 06:29:22.396991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.274 [2024-07-23 06:29:22.397023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.274 [2024-07-23 06:29:22.397272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.274 [2024-07-23 06:29:22.397516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.274 [2024-07-23 06:29:22.397539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.274 [2024-07-23 06:29:22.397554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.274 [2024-07-23 06:29:22.401100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.274 [2024-07-23 06:29:22.410388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.274 [2024-07-23 06:29:22.410866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.274 [2024-07-23 06:29:22.410908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.274 [2024-07-23 06:29:22.410925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.411171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.411414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.411437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.411453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.415269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.275 [2024-07-23 06:29:22.424307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.424782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.424810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.424827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.425073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.425316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.425339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.425354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.428905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.275 [2024-07-23 06:29:22.438117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.438570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.438599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.438625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.438860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.439116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.439140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.439155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.442699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.275 [2024-07-23 06:29:22.451993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.452478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.452520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.452536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.452805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.453049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.453072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.453087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.456675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.275 [2024-07-23 06:29:22.465970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.466407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.466443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.466460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.466711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.466955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.466979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.466995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.470579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.275 [2024-07-23 06:29:22.479886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.480354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.480385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.480403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.480654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.480914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.480937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.480952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.484530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.275 [2024-07-23 06:29:22.493841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.494295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.494326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.494344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.494582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.494835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.494859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.494874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.498453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.275 [2024-07-23 06:29:22.507752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.508211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.508237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.508253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.508495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.508752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.508777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.508792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.512370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.275 [2024-07-23 06:29:22.521671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.522147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.522173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.522203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.522447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.522713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.522737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.522753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.526334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.275 [2024-07-23 06:29:22.535644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.536089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.536120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.536138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.275 [2024-07-23 06:29:22.536375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.275 [2024-07-23 06:29:22.536630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.275 [2024-07-23 06:29:22.536653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.275 [2024-07-23 06:29:22.536669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.275 [2024-07-23 06:29:22.540247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.275 [2024-07-23 06:29:22.549537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.275 [2024-07-23 06:29:22.549991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.275 [2024-07-23 06:29:22.550026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.275 [2024-07-23 06:29:22.550059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.276 [2024-07-23 06:29:22.550298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.276 [2024-07-23 06:29:22.550540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.276 [2024-07-23 06:29:22.550563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.276 [2024-07-23 06:29:22.550578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.276 [2024-07-23 06:29:22.554167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.276 [2024-07-23 06:29:22.563464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.276 [2024-07-23 06:29:22.563918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.276 [2024-07-23 06:29:22.563967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.276 [2024-07-23 06:29:22.563985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.276 [2024-07-23 06:29:22.564224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.276 [2024-07-23 06:29:22.564466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.276 [2024-07-23 06:29:22.564489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.276 [2024-07-23 06:29:22.564504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.276 [2024-07-23 06:29:22.568225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.276 [2024-07-23 06:29:22.577307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.276 [2024-07-23 06:29:22.577750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.276 [2024-07-23 06:29:22.577787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.276 [2024-07-23 06:29:22.577806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.276 [2024-07-23 06:29:22.578045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.276 [2024-07-23 06:29:22.578288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.276 [2024-07-23 06:29:22.578312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.276 [2024-07-23 06:29:22.578326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.276 [2024-07-23 06:29:22.581916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.276 [2024-07-23 06:29:22.591230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.276 [2024-07-23 06:29:22.591688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.276 [2024-07-23 06:29:22.591720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.276 [2024-07-23 06:29:22.591738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.276 [2024-07-23 06:29:22.591976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.276 [2024-07-23 06:29:22.592218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.276 [2024-07-23 06:29:22.592241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.276 [2024-07-23 06:29:22.592257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.276 [2024-07-23 06:29:22.595844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.276 [2024-07-23 06:29:22.605131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.276 [2024-07-23 06:29:22.605538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.276 [2024-07-23 06:29:22.605570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.276 [2024-07-23 06:29:22.605588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.276 [2024-07-23 06:29:22.605843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.276 [2024-07-23 06:29:22.606087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.276 [2024-07-23 06:29:22.606110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.276 [2024-07-23 06:29:22.606125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.276 [2024-07-23 06:29:22.609712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.536 [2024-07-23 06:29:22.619020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.536 [2024-07-23 06:29:22.619508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.536 [2024-07-23 06:29:22.619556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.536 [2024-07-23 06:29:22.619573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.536 [2024-07-23 06:29:22.619824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.536 [2024-07-23 06:29:22.620078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.536 [2024-07-23 06:29:22.620102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.536 [2024-07-23 06:29:22.620117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.536 [2024-07-23 06:29:22.623709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.536 [2024-07-23 06:29:22.633009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.536 [2024-07-23 06:29:22.633447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.536 [2024-07-23 06:29:22.633478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.536 [2024-07-23 06:29:22.633496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.536 [2024-07-23 06:29:22.633746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.536 [2024-07-23 06:29:22.633990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.536 [2024-07-23 06:29:22.634013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.536 [2024-07-23 06:29:22.634028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.536 [2024-07-23 06:29:22.637608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.536 [2024-07-23 06:29:22.646905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.536 [2024-07-23 06:29:22.647374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.536 [2024-07-23 06:29:22.647405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.536 [2024-07-23 06:29:22.647423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.536 [2024-07-23 06:29:22.647675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.536 [2024-07-23 06:29:22.647919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.536 [2024-07-23 06:29:22.647942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.536 [2024-07-23 06:29:22.647957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.536 [2024-07-23 06:29:22.651539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.536 [2024-07-23 06:29:22.660861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.536 [2024-07-23 06:29:22.661368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.536 [2024-07-23 06:29:22.661395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.536 [2024-07-23 06:29:22.661410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.536 [2024-07-23 06:29:22.661677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.536 [2024-07-23 06:29:22.661921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.536 [2024-07-23 06:29:22.661944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.536 [2024-07-23 06:29:22.661960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.536 [2024-07-23 06:29:22.665538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.537 [2024-07-23 06:29:22.674843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.675301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.675332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.675350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.675588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.675842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.675866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.675881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.679461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.537 [2024-07-23 06:29:22.688759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.689222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.689252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.689270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.689508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.689763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.689787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.689801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.693380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.537 [2024-07-23 06:29:22.702679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.703108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.703139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.703157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.703395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.703651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.703675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.703690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.707272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.537 [2024-07-23 06:29:22.716564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.717025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.717055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.717078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.717318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.717560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.717584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.717599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.721188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.537 [2024-07-23 06:29:22.730486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.730936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.730978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.730994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.731246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.731489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.731512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.731527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.735113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.537 [2024-07-23 06:29:22.744405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.744853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.744879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.744894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.745108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.745367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.745390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.745404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.748993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.537 [2024-07-23 06:29:22.758289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.758729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.758756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.758772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.759021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.759265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.759293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.759308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.762899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.537 [2024-07-23 06:29:22.772194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.772649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.772681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.772699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.772938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.773181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.773205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.773221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.776812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.537 [2024-07-23 06:29:22.786107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.786675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.786706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.786724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.786963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.787206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.537 [2024-07-23 06:29:22.787229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.537 [2024-07-23 06:29:22.787244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.537 [2024-07-23 06:29:22.790854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.537 [2024-07-23 06:29:22.800151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.537 [2024-07-23 06:29:22.800587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.537 [2024-07-23 06:29:22.800626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.537 [2024-07-23 06:29:22.800646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.537 [2024-07-23 06:29:22.800886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.537 [2024-07-23 06:29:22.801129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.538 [2024-07-23 06:29:22.801152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.538 [2024-07-23 06:29:22.801167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.538 [2024-07-23 06:29:22.804755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.538 [2024-07-23 06:29:22.814043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.538 [2024-07-23 06:29:22.814503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.538 [2024-07-23 06:29:22.814534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.538 [2024-07-23 06:29:22.814552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.538 [2024-07-23 06:29:22.814804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.538 [2024-07-23 06:29:22.815048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.538 [2024-07-23 06:29:22.815071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.538 [2024-07-23 06:29:22.815086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.538 [2024-07-23 06:29:22.818671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.538 [2024-07-23 06:29:22.828001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.538 [2024-07-23 06:29:22.828410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.538 [2024-07-23 06:29:22.828442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.538 [2024-07-23 06:29:22.828460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.538 [2024-07-23 06:29:22.828715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.538 [2024-07-23 06:29:22.828959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.538 [2024-07-23 06:29:22.828983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.538 [2024-07-23 06:29:22.828998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.538 [2024-07-23 06:29:22.832582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.538 [2024-07-23 06:29:22.841905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.538 [2024-07-23 06:29:22.842306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.538 [2024-07-23 06:29:22.842337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.538 [2024-07-23 06:29:22.842355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.538 [2024-07-23 06:29:22.842594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.538 [2024-07-23 06:29:22.842847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.538 [2024-07-23 06:29:22.842870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.538 [2024-07-23 06:29:22.842886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.538 [2024-07-23 06:29:22.846460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.538 [2024-07-23 06:29:22.855817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.538 [2024-07-23 06:29:22.856277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.538 [2024-07-23 06:29:22.856307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.538 [2024-07-23 06:29:22.856325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.538 [2024-07-23 06:29:22.856569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.538 [2024-07-23 06:29:22.856824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.538 [2024-07-23 06:29:22.856846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.538 [2024-07-23 06:29:22.856860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.538 [2024-07-23 06:29:22.860475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.538 [2024-07-23 06:29:22.869912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.538 [2024-07-23 06:29:22.870446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.538 [2024-07-23 06:29:22.870478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.538 [2024-07-23 06:29:22.870496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.538 [2024-07-23 06:29:22.870754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.538 [2024-07-23 06:29:22.871000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.538 [2024-07-23 06:29:22.871023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.538 [2024-07-23 06:29:22.871038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.538 [2024-07-23 06:29:22.874650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.799 [2024-07-23 06:29:22.883798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.799 [2024-07-23 06:29:22.884249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.799 [2024-07-23 06:29:22.884298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.799 [2024-07-23 06:29:22.884315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.799 [2024-07-23 06:29:22.884554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.799 [2024-07-23 06:29:22.884809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.799 [2024-07-23 06:29:22.884833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.799 [2024-07-23 06:29:22.884848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.799 [2024-07-23 06:29:22.888457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.799 [2024-07-23 06:29:22.897779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.799 [2024-07-23 06:29:22.898223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.799 [2024-07-23 06:29:22.898253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.799 [2024-07-23 06:29:22.898271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.799 [2024-07-23 06:29:22.898509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.799 [2024-07-23 06:29:22.898765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.799 [2024-07-23 06:29:22.898790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.799 [2024-07-23 06:29:22.898811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.799 [2024-07-23 06:29:22.902396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.799 [2024-07-23 06:29:22.911725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.799 [2024-07-23 06:29:22.912191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.799 [2024-07-23 06:29:22.912221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.799 [2024-07-23 06:29:22.912239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.799 [2024-07-23 06:29:22.912477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.799 [2024-07-23 06:29:22.912742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.799 [2024-07-23 06:29:22.912764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.799 [2024-07-23 06:29:22.912777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.799 [2024-07-23 06:29:22.916401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.799 [2024-07-23 06:29:22.925823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.799 [2024-07-23 06:29:22.926283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.799 [2024-07-23 06:29:22.926318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.799 [2024-07-23 06:29:22.926336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:22.926574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:22.926832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:22.926855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:22.926868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:22.930494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.800 [2024-07-23 06:29:22.939855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:22.940319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:22.940350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:22.940367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:22.940606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:22.940858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:22.940892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:22.940907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:22.944488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.800 [2024-07-23 06:29:22.953816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:22.954283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:22.954337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:22.954355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:22.954594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:22.954849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:22.954873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:22.954888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:22.958475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.800 [2024-07-23 06:29:22.967796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:22.968252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:22.968283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:22.968300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:22.968539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:22.968791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:22.968815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:22.968831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:22.972406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.800 [2024-07-23 06:29:22.981714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:22.982201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:22.982249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:22.982267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:22.982506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:22.982758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:22.982782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:22.982797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:22.986380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.800 [2024-07-23 06:29:22.995682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:22.996119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:22.996149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:22.996167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:22.996405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:22.996665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:22.996699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:22.996714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:23.000325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.800 [2024-07-23 06:29:23.009625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:23.010097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:23.010128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:23.010146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:23.010384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:23.010638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:23.010663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:23.010678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:23.014251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.800 [2024-07-23 06:29:23.023544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:23.024007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:23.024037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:23.024055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:23.024293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:23.024536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:23.024560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:23.024575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:23.028160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.800 [2024-07-23 06:29:23.037453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:23.037928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:23.037959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:23.037977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:23.038215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:23.038458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:23.038481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:23.038496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:23.042090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.800 [2024-07-23 06:29:23.051379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:23.051858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:23.051889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:23.051907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:23.052146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:23.052388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.800 [2024-07-23 06:29:23.052412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.800 [2024-07-23 06:29:23.052426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.800 [2024-07-23 06:29:23.056014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.800 [2024-07-23 06:29:23.065322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.800 [2024-07-23 06:29:23.065777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.800 [2024-07-23 06:29:23.065808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.800 [2024-07-23 06:29:23.065826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.800 [2024-07-23 06:29:23.066065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.800 [2024-07-23 06:29:23.066308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.801 [2024-07-23 06:29:23.066331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.801 [2024-07-23 06:29:23.066346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.801 [2024-07-23 06:29:23.069931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.801 [2024-07-23 06:29:23.079225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.801 [2024-07-23 06:29:23.079663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.801 [2024-07-23 06:29:23.079694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.801 [2024-07-23 06:29:23.079711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.801 [2024-07-23 06:29:23.079950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.801 [2024-07-23 06:29:23.080193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.801 [2024-07-23 06:29:23.080217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.801 [2024-07-23 06:29:23.080232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.801 [2024-07-23 06:29:23.083815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.801 [2024-07-23 06:29:23.093104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.801 [2024-07-23 06:29:23.093550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.801 [2024-07-23 06:29:23.093581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.801 [2024-07-23 06:29:23.093604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.801 [2024-07-23 06:29:23.093854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.801 [2024-07-23 06:29:23.094097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.801 [2024-07-23 06:29:23.094121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.801 [2024-07-23 06:29:23.094135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.801 [2024-07-23 06:29:23.097721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.801 [2024-07-23 06:29:23.107014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.801 [2024-07-23 06:29:23.107443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.801 [2024-07-23 06:29:23.107473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.801 [2024-07-23 06:29:23.107491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.801 [2024-07-23 06:29:23.107741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.801 [2024-07-23 06:29:23.107985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.801 [2024-07-23 06:29:23.108008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.801 [2024-07-23 06:29:23.108023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.801 [2024-07-23 06:29:23.111601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.801 [2024-07-23 06:29:23.120898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.801 [2024-07-23 06:29:23.121354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.801 [2024-07-23 06:29:23.121385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.801 [2024-07-23 06:29:23.121403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.801 [2024-07-23 06:29:23.121652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.801 [2024-07-23 06:29:23.121896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.801 [2024-07-23 06:29:23.121919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.801 [2024-07-23 06:29:23.121934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.801 [2024-07-23 06:29:23.125513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.801 [2024-07-23 06:29:23.134812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.801 [2024-07-23 06:29:23.135224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.801 [2024-07-23 06:29:23.135254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:29.801 [2024-07-23 06:29:23.135272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:29.801 [2024-07-23 06:29:23.135511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:29.801 [2024-07-23 06:29:23.135764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.801 [2024-07-23 06:29:23.135794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.801 [2024-07-23 06:29:23.135809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.801 [2024-07-23 06:29:23.139387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.061 [2024-07-23 06:29:23.148694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.061 [2024-07-23 06:29:23.149153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.061 [2024-07-23 06:29:23.149184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.061 [2024-07-23 06:29:23.149201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.061 [2024-07-23 06:29:23.149440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.061 [2024-07-23 06:29:23.149695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.061 [2024-07-23 06:29:23.149719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.061 [2024-07-23 06:29:23.149734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.061 [2024-07-23 06:29:23.153310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.061 [2024-07-23 06:29:23.162604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.061 [2024-07-23 06:29:23.163042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.061 [2024-07-23 06:29:23.163073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.061 [2024-07-23 06:29:23.163091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.061 [2024-07-23 06:29:23.163329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.061 [2024-07-23 06:29:23.163571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.061 [2024-07-23 06:29:23.163595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.061 [2024-07-23 06:29:23.163609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.061 [2024-07-23 06:29:23.167197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.061 [2024-07-23 06:29:23.176486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.061 [2024-07-23 06:29:23.176952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.061 [2024-07-23 06:29:23.176983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.061 [2024-07-23 06:29:23.177000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.061 [2024-07-23 06:29:23.177239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.061 [2024-07-23 06:29:23.177482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.061 [2024-07-23 06:29:23.177505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.061 [2024-07-23 06:29:23.177519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.061 [2024-07-23 06:29:23.181107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.061 [2024-07-23 06:29:23.190400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.061 [2024-07-23 06:29:23.190846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.190877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.190895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.191133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.191376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.191399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.191414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.195001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.062 [2024-07-23 06:29:23.204291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.204721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.204753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.204770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.205010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.205252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.205276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.205291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.208896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.062 [2024-07-23 06:29:23.218192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.218634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.218665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.218683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.218922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.219165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.219188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.219202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.222789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.062 [2024-07-23 06:29:23.232088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.232545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.232576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.232599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.232849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.233092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.233116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.233131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.236714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.062 [2024-07-23 06:29:23.246001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.246431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.246462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.246479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.246729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.246973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.246996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.247011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.250590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.062 [2024-07-23 06:29:23.259886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.260341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.260371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.260388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.260637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.260881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.260904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.260919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.264495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.062 [2024-07-23 06:29:23.273805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.274216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.274247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.274264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.274503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.274757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.274786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.274803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.278378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.062 [2024-07-23 06:29:23.287676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.288134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.288164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.288182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.288421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.288675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.288699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.288713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.292289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.062 [2024-07-23 06:29:23.301576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.302040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.302071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.302089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.302328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.302570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.302593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.302608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.306197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.062 [2024-07-23 06:29:23.315488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.315929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.315959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.062 [2024-07-23 06:29:23.315977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.062 [2024-07-23 06:29:23.316215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.062 [2024-07-23 06:29:23.316458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.062 [2024-07-23 06:29:23.316481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.062 [2024-07-23 06:29:23.316496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.062 [2024-07-23 06:29:23.320082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.062 [2024-07-23 06:29:23.329372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.062 [2024-07-23 06:29:23.329827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.062 [2024-07-23 06:29:23.329858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.063 [2024-07-23 06:29:23.329875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.063 [2024-07-23 06:29:23.330113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.063 [2024-07-23 06:29:23.330356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.063 [2024-07-23 06:29:23.330379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.063 [2024-07-23 06:29:23.330395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.063 [2024-07-23 06:29:23.333980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.063 [2024-07-23 06:29:23.343268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.063 [2024-07-23 06:29:23.343701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.063 [2024-07-23 06:29:23.343732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.063 [2024-07-23 06:29:23.343750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.063 [2024-07-23 06:29:23.343989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.063 [2024-07-23 06:29:23.344231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.063 [2024-07-23 06:29:23.344255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.063 [2024-07-23 06:29:23.344269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.063 [2024-07-23 06:29:23.347855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.063 [2024-07-23 06:29:23.357156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.063 [2024-07-23 06:29:23.357595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.063 [2024-07-23 06:29:23.357632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.063 [2024-07-23 06:29:23.357651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.063 [2024-07-23 06:29:23.357891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.063 [2024-07-23 06:29:23.358134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.063 [2024-07-23 06:29:23.358157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.063 [2024-07-23 06:29:23.358172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.063 [2024-07-23 06:29:23.361756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.063 [2024-07-23 06:29:23.371047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.063 [2024-07-23 06:29:23.371510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.063 [2024-07-23 06:29:23.371541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.063 [2024-07-23 06:29:23.371558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.063 [2024-07-23 06:29:23.371813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.063 [2024-07-23 06:29:23.372056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.063 [2024-07-23 06:29:23.372079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.063 [2024-07-23 06:29:23.372094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.063 [2024-07-23 06:29:23.375678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.063 [2024-07-23 06:29:23.384972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.063 [2024-07-23 06:29:23.385430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.063 [2024-07-23 06:29:23.385460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.063 [2024-07-23 06:29:23.385477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.063 [2024-07-23 06:29:23.385728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.063 [2024-07-23 06:29:23.385971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.063 [2024-07-23 06:29:23.385995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.063 [2024-07-23 06:29:23.386010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.063 [2024-07-23 06:29:23.389588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.063 [2024-07-23 06:29:23.398903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.063 [2024-07-23 06:29:23.399337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.063 [2024-07-23 06:29:23.399368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.063 [2024-07-23 06:29:23.399387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.063 [2024-07-23 06:29:23.399635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.063 [2024-07-23 06:29:23.399879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.063 [2024-07-23 06:29:23.399903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.063 [2024-07-23 06:29:23.399918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.063 [2024-07-23 06:29:23.403497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.323 [2024-07-23 06:29:23.412512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.323 [2024-07-23 06:29:23.412979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.323 [2024-07-23 06:29:23.413007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.323 [2024-07-23 06:29:23.413024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.323 [2024-07-23 06:29:23.413278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.323 [2024-07-23 06:29:23.413478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.323 [2024-07-23 06:29:23.413497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.323 [2024-07-23 06:29:23.413515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.323 [2024-07-23 06:29:23.416628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.323 [2024-07-23 06:29:23.425996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.323 [2024-07-23 06:29:23.426446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.323 [2024-07-23 06:29:23.426473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.323 [2024-07-23 06:29:23.426507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.323 [2024-07-23 06:29:23.426777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.323 [2024-07-23 06:29:23.427001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.323 [2024-07-23 06:29:23.427021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.323 [2024-07-23 06:29:23.427034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.323 [2024-07-23 06:29:23.430111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.324 [2024-07-23 06:29:23.439237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.439650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.439679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.439696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.439949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.440149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.440168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.440180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.443166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.324 [2024-07-23 06:29:23.452450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.452946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.452974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.452990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.453232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.453446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.453465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.453478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.456490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.324 [2024-07-23 06:29:23.465836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.466556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.466611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.466639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.466878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.467097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.467116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.467128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.470125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.324 [2024-07-23 06:29:23.479113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.479541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.479570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.479601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.479855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.480073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.480092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.480105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.483110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.324 [2024-07-23 06:29:23.492435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.492880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.492912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.492928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.493181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.493381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.493400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.493412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.496413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.324 [2024-07-23 06:29:23.505734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.506153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.506181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.506197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.506427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.506684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.506706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.506720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.509770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.324 [2024-07-23 06:29:23.519068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.519499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.519526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.519542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.519802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.520023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.520043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.520055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.523054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.324 [2024-07-23 06:29:23.532265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.532805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.532834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.532850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.533109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.533309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.533327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.533340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.536329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.324 [2024-07-23 06:29:23.545629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.546019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.546046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.546061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.546296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.546495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.546514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.546526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.549506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.324 [2024-07-23 06:29:23.559044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.324 [2024-07-23 06:29:23.559512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.324 [2024-07-23 06:29:23.559538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.324 [2024-07-23 06:29:23.559568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.324 [2024-07-23 06:29:23.559833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.324 [2024-07-23 06:29:23.560051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.324 [2024-07-23 06:29:23.560071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.324 [2024-07-23 06:29:23.560083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.324 [2024-07-23 06:29:23.563070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.324 [2024-07-23 06:29:23.572351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.572804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.572832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.572848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.573093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.573307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.573326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.573339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.325 [2024-07-23 06:29:23.576323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.325 [2024-07-23 06:29:23.585645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.586139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.586166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.586182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.586436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.586664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.586684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.586697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.325 [2024-07-23 06:29:23.589679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.325 [2024-07-23 06:29:23.599001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.599434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.599462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.599483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.599734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.599955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.599974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.599986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.325 [2024-07-23 06:29:23.602973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.325 [2024-07-23 06:29:23.612287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.612765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.612794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.612810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.613065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.613264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.613283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.613296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.325 [2024-07-23 06:29:23.616285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.325 [2024-07-23 06:29:23.625580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.626081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.626109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.626125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.626366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.626580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.626599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.626612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.325 [2024-07-23 06:29:23.629605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.325 [2024-07-23 06:29:23.638910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.639357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.639385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.639401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.639650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.639856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.639881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.639894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.325 [2024-07-23 06:29:23.642895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.325 [2024-07-23 06:29:23.652181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.652637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.652665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.652681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.652922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.653137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.653157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.653169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.325 [2024-07-23 06:29:23.656279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.325 [2024-07-23 06:29:23.665804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.325 [2024-07-23 06:29:23.666270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-07-23 06:29:23.666297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.325 [2024-07-23 06:29:23.666314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.325 [2024-07-23 06:29:23.666568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.325 [2024-07-23 06:29:23.666795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.325 [2024-07-23 06:29:23.666816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.325 [2024-07-23 06:29:23.666829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.669870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.587 [2024-07-23 06:29:23.679141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.679588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.679621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.679639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.679882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.680098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.680118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.680130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.683115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.587 [2024-07-23 06:29:23.692388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.692796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.692837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.692852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.693122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.693321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.693340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.693353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.696261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.587 [2024-07-23 06:29:23.705719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.706187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.706229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.706245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.706487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.706732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.706752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.706765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.709791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.587 [2024-07-23 06:29:23.719100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.719596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.719631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.719648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.719886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.720102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.720121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.720133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.723121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.587 [2024-07-23 06:29:23.732405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.732817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.732846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.732862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.733115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.733315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.733334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.733346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.736333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.587 [2024-07-23 06:29:23.745633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.746118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.746159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.746175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.746410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.746636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.746658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.746672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.749667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.587 [2024-07-23 06:29:23.758962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.759445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.759487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.759503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.759754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.759974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.759993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.760005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.762991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.587 [2024-07-23 06:29:23.772262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.772798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.772826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.772842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.773097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.773296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.773315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.773332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.776323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.587 [2024-07-23 06:29:23.785618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.786054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.786081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.786112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.786346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.786545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.786564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.786577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.789561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.587 [2024-07-23 06:29:23.798822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.799229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.799270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.799284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.799533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.799762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.799783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.799796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.802780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.587 [2024-07-23 06:29:23.812108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.587 [2024-07-23 06:29:23.812490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.587 [2024-07-23 06:29:23.812517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.587 [2024-07-23 06:29:23.812532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.587 [2024-07-23 06:29:23.812780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.587 [2024-07-23 06:29:23.812999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.587 [2024-07-23 06:29:23.813019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.587 [2024-07-23 06:29:23.813031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.587 [2024-07-23 06:29:23.816017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.587 [2024-07-23 06:29:23.825455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.825954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.825996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.826012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.826267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.826466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.826485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.826497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.829569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.588 [2024-07-23 06:29:23.838746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.839187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.839215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.839231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.839486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.839713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.839734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.839746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.842732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.588 [2024-07-23 06:29:23.852023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.852458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.852486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.852502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.852727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.852974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.852993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.853006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.855990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.588 [2024-07-23 06:29:23.865262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.865653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.865695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.865712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.865950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.866167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.866186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.866199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.869187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.588 [2024-07-23 06:29:23.878467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.878901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.878943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.878958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.879205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.879404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.879423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.879435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.882423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.588 [2024-07-23 06:29:23.891714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.892189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.892231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.892247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.892497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.892727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.892748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.892761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.895744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.588 [2024-07-23 06:29:23.905034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.905400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.905427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.905442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.905683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.905890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.905910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.905942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.909050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.588 [2024-07-23 06:29:23.918385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.588 [2024-07-23 06:29:23.918802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.588 [2024-07-23 06:29:23.918830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.588 [2024-07-23 06:29:23.918846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.588 [2024-07-23 06:29:23.919086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.588 [2024-07-23 06:29:23.919286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.588 [2024-07-23 06:29:23.919305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.588 [2024-07-23 06:29:23.919318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.588 [2024-07-23 06:29:23.922303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.850 [2024-07-23 06:29:23.931779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:23.932250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:23.932291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.850 [2024-07-23 06:29:23.932308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.850 [2024-07-23 06:29:23.932561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.850 [2024-07-23 06:29:23.932792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.850 [2024-07-23 06:29:23.932813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.850 [2024-07-23 06:29:23.932826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.850 [2024-07-23 06:29:23.935922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.850 [2024-07-23 06:29:23.945034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:23.945420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:23.945461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.850 [2024-07-23 06:29:23.945476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.850 [2024-07-23 06:29:23.945739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.850 [2024-07-23 06:29:23.945960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.850 [2024-07-23 06:29:23.945979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.850 [2024-07-23 06:29:23.945992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.850 [2024-07-23 06:29:23.948978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.850 [2024-07-23 06:29:23.958270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:23.958753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:23.958801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.850 [2024-07-23 06:29:23.958818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.850 [2024-07-23 06:29:23.959073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.850 [2024-07-23 06:29:23.959272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.850 [2024-07-23 06:29:23.959291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.850 [2024-07-23 06:29:23.959304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.850 [2024-07-23 06:29:23.962290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.850 [2024-07-23 06:29:23.971658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:23.972058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:23.972098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.850 [2024-07-23 06:29:23.972113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.850 [2024-07-23 06:29:23.972361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.850 [2024-07-23 06:29:23.972580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.850 [2024-07-23 06:29:23.972600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.850 [2024-07-23 06:29:23.972621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.850 [2024-07-23 06:29:23.975824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.850 [2024-07-23 06:29:23.984969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:23.985417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:23.985458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.850 [2024-07-23 06:29:23.985475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.850 [2024-07-23 06:29:23.985751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.850 [2024-07-23 06:29:23.985972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.850 [2024-07-23 06:29:23.985991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.850 [2024-07-23 06:29:23.986004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.850 [2024-07-23 06:29:23.988994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.850 [2024-07-23 06:29:23.998278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:23.998748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:23.998777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.850 [2024-07-23 06:29:23.998793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.850 [2024-07-23 06:29:23.999034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.850 [2024-07-23 06:29:23.999242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.850 [2024-07-23 06:29:23.999267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.850 [2024-07-23 06:29:23.999279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.850 [2024-07-23 06:29:24.002298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.850 [2024-07-23 06:29:24.011636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:24.012078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:24.012120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.850 [2024-07-23 06:29:24.012137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.850 [2024-07-23 06:29:24.012377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.850 [2024-07-23 06:29:24.012586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.850 [2024-07-23 06:29:24.012632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.850 [2024-07-23 06:29:24.012647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.850 [2024-07-23 06:29:24.015667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.850 [2024-07-23 06:29:24.025100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.850 [2024-07-23 06:29:24.025554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.850 [2024-07-23 06:29:24.025596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.025621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.025866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.026082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.026102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.026114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.029151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.851 [2024-07-23 06:29:24.038430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.038874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.038903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.038919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.039165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.039380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.039400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.039413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.042455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.851 [2024-07-23 06:29:24.051769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.052209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.052236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.052267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.052523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.052750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.052771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.052784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.055770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.851 [2024-07-23 06:29:24.065086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.065515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.065542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.065557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.065827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.066046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.066065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.066078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.069062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.851 [2024-07-23 06:29:24.079152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.079558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.079588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.079606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.079855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.080098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.080121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.080136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.083732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.851 [2024-07-23 06:29:24.093051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.093506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.093537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.093560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.093811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.094056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.094079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.094094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.097679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.851 [2024-07-23 06:29:24.106977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.107448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.107479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.107497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.107744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.107988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.108011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.108026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.111607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.851 [2024-07-23 06:29:24.120917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.121429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.121460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.121477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.121726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.121970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.121993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.122007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.125588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.851 [2024-07-23 06:29:24.134898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.135350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.135380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.135397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.135645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.135888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.135917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.135933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.139514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.851 [2024-07-23 06:29:24.148817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.149247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.149278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.149296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.851 [2024-07-23 06:29:24.149536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.851 [2024-07-23 06:29:24.149789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.851 [2024-07-23 06:29:24.149813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.851 [2024-07-23 06:29:24.149828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.851 [2024-07-23 06:29:24.153405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.851 [2024-07-23 06:29:24.162707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.851 [2024-07-23 06:29:24.163165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.851 [2024-07-23 06:29:24.163195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.851 [2024-07-23 06:29:24.163213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.852 [2024-07-23 06:29:24.163452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.852 [2024-07-23 06:29:24.163707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.852 [2024-07-23 06:29:24.163731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.852 [2024-07-23 06:29:24.163746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.852 [2024-07-23 06:29:24.167323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:30.852 [2024-07-23 06:29:24.176626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.852 [2024-07-23 06:29:24.177061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.852 [2024-07-23 06:29:24.177092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.852 [2024-07-23 06:29:24.177110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.852 [2024-07-23 06:29:24.177348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.852 [2024-07-23 06:29:24.177591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.852 [2024-07-23 06:29:24.177623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.852 [2024-07-23 06:29:24.177640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.852 [2024-07-23 06:29:24.181217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:30.852 [2024-07-23 06:29:24.190518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.852 [2024-07-23 06:29:24.190966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.852 [2024-07-23 06:29:24.190997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:30.852 [2024-07-23 06:29:24.191015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:30.852 [2024-07-23 06:29:24.191253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:30.852 [2024-07-23 06:29:24.191496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.852 [2024-07-23 06:29:24.191519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.852 [2024-07-23 06:29:24.191533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.124 [2024-07-23 06:29:24.195152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.124 [2024-07-23 06:29:24.204455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.124 [2024-07-23 06:29:24.204918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.124 [2024-07-23 06:29:24.204949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.124 [2024-07-23 06:29:24.204967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.124 [2024-07-23 06:29:24.205206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.124 [2024-07-23 06:29:24.205449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.124 [2024-07-23 06:29:24.205472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.124 [2024-07-23 06:29:24.205487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.124 [2024-07-23 06:29:24.209072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.124 [2024-07-23 06:29:24.218365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.124 [2024-07-23 06:29:24.218828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.124 [2024-07-23 06:29:24.218858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.124 [2024-07-23 06:29:24.218876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.124 [2024-07-23 06:29:24.219114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.124 [2024-07-23 06:29:24.219357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.124 [2024-07-23 06:29:24.219380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.124 [2024-07-23 06:29:24.219395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.124 [2024-07-23 06:29:24.222980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.124 [2024-07-23 06:29:24.232275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.124 [2024-07-23 06:29:24.232704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.124 [2024-07-23 06:29:24.232735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.124 [2024-07-23 06:29:24.232754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.232998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.233241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.233264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.233279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.236865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.125 [2024-07-23 06:29:24.246163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.246593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.246635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.246671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.246917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.247160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.247184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.247199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.250785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.125 [2024-07-23 06:29:24.260080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.260534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.260564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.260582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.260831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.261075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.261099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.261114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.264706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.125 [2024-07-23 06:29:24.274021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.274430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.274461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.274480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.274732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.274976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.275000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.275020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.278600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.125 [2024-07-23 06:29:24.287910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.288375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.288406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.288423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.288674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.288918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.288941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.288956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.292538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.125 [2024-07-23 06:29:24.301849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.302310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.302341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.302359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.302597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.302852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.302888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.302903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.306486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.125 [2024-07-23 06:29:24.315801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.316269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.316300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.316317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.316556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.316809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.316834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.316849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.320430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.125 [2024-07-23 06:29:24.329749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.330183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.330214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.330231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.330469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.330724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.330749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.330763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.334342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.125 [2024-07-23 06:29:24.343652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.344104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.344134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.344151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.344389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.344644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.344668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.344683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.348263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.125 [2024-07-23 06:29:24.357570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.358008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.358039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.358057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.125 [2024-07-23 06:29:24.358295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.125 [2024-07-23 06:29:24.358538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.125 [2024-07-23 06:29:24.358561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.125 [2024-07-23 06:29:24.358576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.125 [2024-07-23 06:29:24.362198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.125 [2024-07-23 06:29:24.371501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.125 [2024-07-23 06:29:24.371937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.125 [2024-07-23 06:29:24.371968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.125 [2024-07-23 06:29:24.371986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.126 [2024-07-23 06:29:24.372224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.126 [2024-07-23 06:29:24.372473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.126 [2024-07-23 06:29:24.372496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.126 [2024-07-23 06:29:24.372511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.126 [2024-07-23 06:29:24.376102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1888688 Killed "${NVMF_APP[@]}" "$@" 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1889635 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1889635 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1889635 ']' 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:31.126 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.126 [2024-07-23 06:29:24.385409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.126 [2024-07-23 06:29:24.385881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-07-23 06:29:24.385914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.126 [2024-07-23 06:29:24.385931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.126 [2024-07-23 06:29:24.386173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.126 [2024-07-23 06:29:24.386416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.126 [2024-07-23 06:29:24.386440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.126 [2024-07-23 06:29:24.386454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.126 [2024-07-23 06:29:24.390044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.126 [2024-07-23 06:29:24.399356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.126 [2024-07-23 06:29:24.399792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-07-23 06:29:24.399823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.126 [2024-07-23 06:29:24.399840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.126 [2024-07-23 06:29:24.400084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.126 [2024-07-23 06:29:24.400328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.126 [2024-07-23 06:29:24.400351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.126 [2024-07-23 06:29:24.400366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.126 [2024-07-23 06:29:24.403958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.126 [2024-07-23 06:29:24.413264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.126 [2024-07-23 06:29:24.413703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-07-23 06:29:24.413734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.126 [2024-07-23 06:29:24.413752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.126 [2024-07-23 06:29:24.413992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.126 [2024-07-23 06:29:24.414237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.126 [2024-07-23 06:29:24.414260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.126 [2024-07-23 06:29:24.414276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.126 [2024-07-23 06:29:24.417870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.126 [2024-07-23 06:29:24.427201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.126 [2024-07-23 06:29:24.427672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-07-23 06:29:24.427704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.126 [2024-07-23 06:29:24.427722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.126 [2024-07-23 06:29:24.427962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.126 [2024-07-23 06:29:24.428205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.126 [2024-07-23 06:29:24.428229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.126 [2024-07-23 06:29:24.428244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.126 [2024-07-23 06:29:24.431842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.126 [2024-07-23 06:29:24.433244] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:33:31.126 [2024-07-23 06:29:24.433312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.126 [2024-07-23 06:29:24.440663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.126 [2024-07-23 06:29:24.441113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-07-23 06:29:24.441154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.126 [2024-07-23 06:29:24.441170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.126 [2024-07-23 06:29:24.441422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.126 [2024-07-23 06:29:24.441659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.126 [2024-07-23 06:29:24.441682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.126 [2024-07-23 06:29:24.441695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.126 [2024-07-23 06:29:24.444772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.126 [2024-07-23 06:29:24.454171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.126 [2024-07-23 06:29:24.454643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-07-23 06:29:24.454681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.126 [2024-07-23 06:29:24.454707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.126 [2024-07-23 06:29:24.454944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.126 [2024-07-23 06:29:24.455160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.126 [2024-07-23 06:29:24.455179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.126 [2024-07-23 06:29:24.455192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.126 [2024-07-23 06:29:24.458211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.126 [2024-07-23 06:29:24.467548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.386 [2024-07-23 06:29:24.468029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.386 [2024-07-23 06:29:24.468072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.386 [2024-07-23 06:29:24.468088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.386 [2024-07-23 06:29:24.468328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.386 [2024-07-23 06:29:24.468548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.386 [2024-07-23 06:29:24.468583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.386 [2024-07-23 06:29:24.468596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.386 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.386 [2024-07-23 06:29:24.471631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.386 [2024-07-23 06:29:24.474694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:31.386 [2024-07-23 06:29:24.481507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.386 [2024-07-23 06:29:24.481959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.386 [2024-07-23 06:29:24.482001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.386 [2024-07-23 06:29:24.482016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.386 [2024-07-23 06:29:24.482274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.386 [2024-07-23 06:29:24.482518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.386 [2024-07-23 06:29:24.482547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.386 [2024-07-23 06:29:24.482563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.386 [2024-07-23 06:29:24.486133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
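The EAL notice above ("No free 2048 kB hugepages reported on node 1") is emitted by DPDK while the restarted target initializes; it is informational in this run, but hugepage availability can be checked from the shell if memory setup is suspected. The paths below are standard Linux sysfs/proc locations, not something taken from the test scripts.

  # per-NUMA-node 2 MiB hugepage availability
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
  # system-wide hugepage summary
  grep -i huge /proc/meminfo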
00:33:31.386 [2024-07-23 06:29:24.495460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.386 [2024-07-23 06:29:24.495926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.386 [2024-07-23 06:29:24.495967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.386 [2024-07-23 06:29:24.495982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.386 [2024-07-23 06:29:24.496253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.386 [2024-07-23 06:29:24.496497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.386 [2024-07-23 06:29:24.496520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.386 [2024-07-23 06:29:24.496535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.386 [2024-07-23 06:29:24.500034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.386 [2024-07-23 06:29:24.504442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:31.386 [2024-07-23 06:29:24.509349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.386 [2024-07-23 06:29:24.509863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.386 [2024-07-23 06:29:24.509907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.386 [2024-07-23 06:29:24.509924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.386 [2024-07-23 06:29:24.510184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.386 [2024-07-23 06:29:24.510430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.386 [2024-07-23 06:29:24.510453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.386 [2024-07-23 06:29:24.510469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.386 [2024-07-23 06:29:24.514014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.386 [2024-07-23 06:29:24.523119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.386 [2024-07-23 06:29:24.523740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.386 [2024-07-23 06:29:24.523776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.386 [2024-07-23 06:29:24.523810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.386 [2024-07-23 06:29:24.524061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.386 [2024-07-23 06:29:24.524307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.386 [2024-07-23 06:29:24.524331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.386 [2024-07-23 06:29:24.524349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.386 [2024-07-23 06:29:24.527869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.386 [2024-07-23 06:29:24.536995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.386 [2024-07-23 06:29:24.537429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.386 [2024-07-23 06:29:24.537457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.386 [2024-07-23 06:29:24.537472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.386 [2024-07-23 06:29:24.537725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.386 [2024-07-23 06:29:24.537966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.386 [2024-07-23 06:29:24.537990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.386 [2024-07-23 06:29:24.538005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.386 [2024-07-23 06:29:24.541518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.386 [2024-07-23 06:29:24.551009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.386 [2024-07-23 06:29:24.551561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.386 [2024-07-23 06:29:24.551605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.386 [2024-07-23 06:29:24.551631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.386 [2024-07-23 06:29:24.551876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.552134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.552158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.552174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.555702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.387 [2024-07-23 06:29:24.564825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.565406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.565442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.565461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.565734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.565952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.565988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.566003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.569078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.387 [2024-07-23 06:29:24.578300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.578766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.578796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.578821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.579065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.579271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.579292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.579305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.582381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.387 [2024-07-23 06:29:24.591624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.592071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.592099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.592132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.592374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.592580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.592600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.592622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.595445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.387 [2024-07-23 06:29:24.595475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.387 [2024-07-23 06:29:24.595504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.387 [2024-07-23 06:29:24.595515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.387 [2024-07-23 06:29:24.595525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.387 [2024-07-23 06:29:24.595744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
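The app_setup_trace notices above describe how to capture the nvmf tracepoints enabled by the -e 0xFFFF flag on the restarted target. The commands below simply follow those NOTICE lines; the instance name (-i 0) and the shared-memory file name (nvmf_trace.0) are specific to this run.

  # live snapshot of the running target's trace events
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/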
00:33:31.387 [2024-07-23 06:29:24.598634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.387 [2024-07-23 06:29:24.598698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:31.387 [2024-07-23 06:29:24.598702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.387 [2024-07-23 06:29:24.605190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.605738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.605773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.605792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.606030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.606246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.606267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.606282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.609474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.387 [2024-07-23 06:29:24.618867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.619439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.619491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.619512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.619778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.620016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.620038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.620054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.623325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.387 [2024-07-23 06:29:24.632549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.633165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.633202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.633222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.633461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.633705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.633729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.633746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.637042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.387 [2024-07-23 06:29:24.646195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.646780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.646820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.646840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.647078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.647295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.647316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.647332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.650465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.387 [2024-07-23 06:29:24.659776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.660297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.660346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.660379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.660620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.660836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.660857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.660872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.664070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.387 [2024-07-23 06:29:24.673371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.387 [2024-07-23 06:29:24.673916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.387 [2024-07-23 06:29:24.673953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.387 [2024-07-23 06:29:24.673981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.387 [2024-07-23 06:29:24.674217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.387 [2024-07-23 06:29:24.674433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.387 [2024-07-23 06:29:24.674454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.387 [2024-07-23 06:29:24.674471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.387 [2024-07-23 06:29:24.677574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.388 [2024-07-23 06:29:24.686946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.388 [2024-07-23 06:29:24.687384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.388 [2024-07-23 06:29:24.687411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.388 [2024-07-23 06:29:24.687428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.388 [2024-07-23 06:29:24.687669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.388 [2024-07-23 06:29:24.687883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.388 [2024-07-23 06:29:24.687904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.388 [2024-07-23 06:29:24.687917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.388 [2024-07-23 06:29:24.691111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.388 [2024-07-23 06:29:24.700540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.388 [2024-07-23 06:29:24.700967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.388 [2024-07-23 06:29:24.700996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.388 [2024-07-23 06:29:24.701013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.388 [2024-07-23 06:29:24.701228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.388 [2024-07-23 06:29:24.701448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.388 [2024-07-23 06:29:24.701476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.388 [2024-07-23 06:29:24.701491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.388 [2024-07-23 06:29:24.704746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.388 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:31.388 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:31.388 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:31.388 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:31.388 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.388 [2024-07-23 06:29:24.714191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.388 [2024-07-23 06:29:24.714647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.388 [2024-07-23 06:29:24.714676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.388 [2024-07-23 06:29:24.714692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.388 [2024-07-23 06:29:24.714922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.388 [2024-07-23 06:29:24.715134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.388 [2024-07-23 06:29:24.715155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.388 [2024-07-23 06:29:24.715168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.388 [2024-07-23 06:29:24.718393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.388 [2024-07-23 06:29:24.727747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.388 [2024-07-23 06:29:24.728172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.388 [2024-07-23 06:29:24.728199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.388 [2024-07-23 06:29:24.728216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.388 [2024-07-23 06:29:24.728443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.388 [2024-07-23 06:29:24.728694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.388 [2024-07-23 06:29:24.728716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.388 [2024-07-23 06:29:24.728731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.649 [2024-07-23 06:29:24.731950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.649 [2024-07-23 06:29:24.740494] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.649 [2024-07-23 06:29:24.741232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.649 [2024-07-23 06:29:24.741627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.649 [2024-07-23 06:29:24.741656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.649 [2024-07-23 06:29:24.741677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.649 [2024-07-23 06:29:24.741893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.649 [2024-07-23 06:29:24.742121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.649 [2024-07-23 06:29:24.742141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.649 [2024-07-23 06:29:24.742154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.649 [2024-07-23 06:29:24.745387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.649 [2024-07-23 06:29:24.754798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.649 [2024-07-23 06:29:24.755195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.649 [2024-07-23 06:29:24.755223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.649 [2024-07-23 06:29:24.755239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.649 [2024-07-23 06:29:24.755454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.649 [2024-07-23 06:29:24.755682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.649 [2024-07-23 06:29:24.755705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.649 [2024-07-23 06:29:24.755719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.649 [2024-07-23 06:29:24.759007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.649 [2024-07-23 06:29:24.768450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.649 [2024-07-23 06:29:24.769016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.649 [2024-07-23 06:29:24.769055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.649 [2024-07-23 06:29:24.769074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.649 [2024-07-23 06:29:24.769314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.649 [2024-07-23 06:29:24.769530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.649 [2024-07-23 06:29:24.769552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.649 [2024-07-23 06:29:24.769567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.649 [2024-07-23 06:29:24.772819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.649 Malloc0 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.649 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.649 [2024-07-23 06:29:24.782184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.649 [2024-07-23 06:29:24.782675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.649 [2024-07-23 06:29:24.782707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.650 [2024-07-23 06:29:24.782725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.650 [2024-07-23 06:29:24.782961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.650 [2024-07-23 06:29:24.783176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.650 [2024-07-23 06:29:24.783197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.650 [2024-07-23 06:29:24.783212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.650 [2024-07-23 06:29:24.786396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.650 [2024-07-23 06:29:24.795792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.650 [2024-07-23 06:29:24.796214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.650 [2024-07-23 06:29:24.796241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144ab50 with addr=10.0.0.2, port=4420 00:33:31.650 [2024-07-23 06:29:24.796258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab50 is same with the state(5) to be set 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.650 [2024-07-23 06:29:24.796474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144ab50 (9): Bad file descriptor 00:33:31.650 [2024-07-23 06:29:24.796703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:31.650 [2024-07-23 06:29:24.796724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:31.650 [2024-07-23 06:29:24.796738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:31.650 [2024-07-23 06:29:24.799996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.650 [2024-07-23 06:29:24.800045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.650 06:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1888908 00:33:31.650 [2024-07-23 06:29:24.809304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:31.650 [2024-07-23 06:29:24.969502] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:41.619 00:33:41.619 Latency(us) 00:33:41.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.619 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:41.619 Verification LBA range: start 0x0 length 0x4000 00:33:41.619 Nvme1n1 : 15.01 6834.62 26.70 9149.46 0.00 7984.02 843.47 23592.96 00:33:41.619 =================================================================================================================== 00:33:41.619 Total : 6834.62 26.70 9149.46 0.00 7984.02 843.47 23592.96 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:41.619 rmmod nvme_tcp 00:33:41.619 rmmod nvme_fabrics 00:33:41.619 rmmod nvme_keyring 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1889635 ']' 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1889635 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1889635 ']' 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1889635 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889635 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889635' 00:33:41.619 killing process with pid 1889635 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1889635 00:33:41.619 
06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1889635 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.619 06:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:43.528 00:33:43.528 real 0m22.174s 00:33:43.528 user 0m59.429s 00:33:43.528 sys 0m4.231s 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:43.528 ************************************ 00:33:43.528 END TEST nvmf_bdevperf 00:33:43.528 ************************************ 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.528 ************************************ 00:33:43.528 START TEST nvmf_target_disconnect 00:33:43.528 ************************************ 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:43.528 * Looking for test storage... 
00:33:43.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.528 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.529 
06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:43.529 06:29:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.432 
06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:45.432 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.432 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:45.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:45.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:45.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:45.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:45.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:33:45.433 00:33:45.433 --- 10.0.0.2 ping statistics --- 00:33:45.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.433 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:45.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:33:45.433 00:33:45.433 --- 10.0.0.1 ping statistics --- 00:33:45.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.433 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:45.433 ************************************ 00:33:45.433 START TEST nvmf_target_disconnect_tc1 00:33:45.433 ************************************ 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:45.433 06:29:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:45.433 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.433 [2024-07-23 06:29:38.737285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.433 [2024-07-23 06:29:38.737353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17693e0 with addr=10.0.0.2, port=4420 00:33:45.433 [2024-07-23 06:29:38.737392] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:45.433 [2024-07-23 06:29:38.737418] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:45.433 [2024-07-23 06:29:38.737439] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:45.433 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:45.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:45.433 Initializing NVMe Controllers 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:45.433 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:45.434 00:33:45.434 real 0m0.088s 00:33:45.434 user 0m0.033s 00:33:45.434 sys 0m0.055s 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:45.434 ************************************ 00:33:45.434 END TEST nvmf_target_disconnect_tc1 00:33:45.434 ************************************ 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.434 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:45.692 ************************************ 00:33:45.692 START TEST nvmf_target_disconnect_tc2 00:33:45.692 ************************************ 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1892670 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1892670 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1892670 ']' 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.692 06:29:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.692 [2024-07-23 06:29:38.843692] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:45.692 [2024-07-23 06:29:38.843780] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.692 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.692 [2024-07-23 06:29:38.884292] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:45.692 [2024-07-23 06:29:38.911530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:45.692 [2024-07-23 06:29:38.998664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.692 [2024-07-23 06:29:38.998720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.692 [2024-07-23 06:29:38.998749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.692 [2024-07-23 06:29:38.998761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.692 [2024-07-23 06:29:38.998771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.692 [2024-07-23 06:29:38.999081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:45.692 [2024-07-23 06:29:38.999143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:45.692 [2024-07-23 06:29:38.999185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:45.692 [2024-07-23 06:29:38.999188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.951 Malloc0 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.951 [2024-07-23 06:29:39.162534] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.951 [2024-07-23 06:29:39.190808] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1892807 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:45.951 06:29:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:45.951 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.524 06:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1892670 00:33:48.524 06:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed 
with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 [2024-07-23 06:29:41.214897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, 
sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Read completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.524 Write completed with error (sct=0, sc=8) 00:33:48.524 starting I/O failed 00:33:48.525 Write completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 Read completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 Write completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 Read completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 Write completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 Read completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 Read completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 Read completed with error (sct=0, sc=8) 00:33:48.525 starting I/O failed 00:33:48.525 [2024-07-23 06:29:41.215232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:48.525 [2024-07-23 06:29:41.215464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.215494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.215684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.215711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.215892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.215926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 
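For orientation amid the connect() noise: the rpc_cmd lines near the top of this excerpt are what build the target under test, a 64 MB Malloc0 bdev exported through subsystem nqn.2016-06.io.spdk:cnode1 behind a TCP listener at 10.0.0.2:4420. Reduced to plain rpc.py calls, the same configuration looks roughly like the sketch below; the RPC method names and arguments are copied from the log, while the rpc.py path and the default RPC socket are assumptions about the local checkout.

# Sketch of the target setup performed by the rpc_cmd wrappers above.
# Assumes a running nvmf_tgt reachable on the default RPC socket.
RPC=./scripts/rpc.py   # path is an assumption
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_transport -t tcp -o             # flags exactly as invoked in the log
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # 'discovery' as written in the log; some versions expect the full discovery NQN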
00:33:48.525 [2024-07-23 06:29:41.216137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.216163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.216314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.216340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.216515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.216541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.216700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.216727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.216879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.216905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.217110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.217136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.217291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.217317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.217500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.217526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.217694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.217721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.217876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.217901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 
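The host side is the bundled reconnect example, started at host/target_disconnect.sh@40 just before the old target pid is killed with kill -9. The flag readings below are a best guess, assuming the tool follows the same conventions as SPDK's perf example; only the command line itself is taken from the log.

#   -q 32      queue depth per qpair (assumed)
#   -o 4096    I/O size in bytes (assumed)
#   -w randrw  mixed random read/write workload
#   -M 50      read percentage of the mix (assumed)
#   -t 10      run time in seconds
#   -c 0xF     core mask, i.e. four cores (assumed)
#   -r ...     transport ID of the target configured above
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'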
00:33:48.525 [2024-07-23 06:29:41.218207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.218236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.218484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.218535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.218752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.218778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.218938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.218963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.219147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.219174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.219327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.219352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.219499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.219545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.219739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.219767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.219943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.219970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.220162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.220188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 
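One setup detail that matters for reading the rest of this log: before the disconnect is triggered, the harness arms a cleanup trap (the nvmf/common.sh@484 line above), so shared-memory state is collected and nvmftestfini runs even if the test aborts mid-failure. A self-contained sketch of that idiom, with placeholder function bodies standing in for the real helpers:

# Minimal sketch of the cleanup-trap idiom visible in the trace above.
# process_shm and nvmftestfini are placeholders here, not SPDK's implementations.
process_shm()  { echo "collecting shm state ($*)"; }
nvmftestfini() { echo "tearing down target and test network"; }
NVMF_APP_SHM_ID=0   # assumed value for illustration
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
echo "test body runs here; cleanup fires on EXIT or on an interrupt"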
00:33:48.525 [2024-07-23 06:29:41.220337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.220363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.220540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.220566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.220759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.220785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.220940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.220965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.221165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.221191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.221380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.221409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.221580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.221606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.221769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.221799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.221998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.222023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.525 [2024-07-23 06:29:41.222190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.222215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 
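Every "connect() failed, errno = 111" entry above is ECONNREFUSED: the reconnect tool keeps redialing 10.0.0.2:4420, but the target that owned that listener was killed a moment earlier, so the kernel rejects each attempt and the NVMe/TCP driver marks the qpair unrecoverable. A quick manual probe of that state from a shell, purely illustrative (/dev/tcp is a bash feature, not part of the test):

# Probe the listener the reconnect tool is trying to reach. With the target
# gone this fails immediately, matching the errno = 111 lines above.
if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "4420 is accepting connections again"
else
    echo "connection refused - reconnect attempts will keep failing"
fi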
00:33:48.525 [2024-07-23 06:29:41.222418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.525 [2024-07-23 06:29:41.222444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.525 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.222595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.222627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.222782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.222808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.222963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.222990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.223194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.223220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.223455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.223483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.223713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.223741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.223893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.223920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.224142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.224168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.224319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.224345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 
00:33:48.526 [2024-07-23 06:29:41.224511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.224556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.224777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.224803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.224953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.224978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.225168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.225197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.225410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.225439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.225611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.225641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.225821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.225847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.226028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.226058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.226265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.226292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.226514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.226543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 
00:33:48.526 [2024-07-23 06:29:41.226755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.226781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.226936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.226962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.227139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.227166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.227358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.227386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.227564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.227591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 
00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Write completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 Read completed with error (sct=0, sc=8) 00:33:48.526 starting I/O failed 00:33:48.526 [2024-07-23 06:29:41.227921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.526 [2024-07-23 06:29:41.228155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.228198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.228428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.228455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.526 qpair failed and we were unable to recover it. 00:33:48.526 [2024-07-23 06:29:41.228631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.526 [2024-07-23 06:29:41.228658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.228813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.228838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.229017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.229043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.229202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.229228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.229385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.229426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 
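The "Read/Write completed with error (sct=0, sc=8)" bursts are the in-flight commands being failed back once the socket drops (sct=0/sc=8 reads as the generic "command aborted due to SQ deletion" status), and "CQ transport error -6" is -ENXIO from the completion path, here hitting qpair ids 3, 4 and 1 in turn. When digging through a saved copy of this console output, a few greps give a quick summary; build.log is an assumed filename, not something the test writes itself.

# Hypothetical post-mortem on a saved copy of this console output.
grep -c 'completed with error (sct=0, sc=8)' build.log    # how many I/Os were failed back
grep -n  'CQ transport error -6' build.log                # where each qpair hit -ENXIO
grep -o  'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c  # reconnect attempts per qpair handle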
00:33:48.527 [2024-07-23 06:29:41.229630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.229655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.229807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.229832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.230000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.230025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.230195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.230220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.230417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.230442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.230588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.230622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.230775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.230800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.230937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.230962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.231152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.231180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.231379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.231404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 
00:33:48.527 [2024-07-23 06:29:41.231556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.231581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.231723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.231749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.231896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.231929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.232086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.232112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.232282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.232310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.232483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.232509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.232660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.232686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.232840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.232865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.233047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.233071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.233267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.233293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 
00:33:48.527 [2024-07-23 06:29:41.233498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.233527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.233701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.233726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.233878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.233912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.234079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.234104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.234269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.234294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.234438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.234463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.234666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.234692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.234846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.234871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.235038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.235063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.235208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.235249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 
00:33:48.527 [2024-07-23 06:29:41.235423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.235448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.235597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.235645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.235815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.235840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.527 [2024-07-23 06:29:41.236023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.527 [2024-07-23 06:29:41.236048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.527 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.236226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.236251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.236426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.236451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.236604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.236636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.236795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.236820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.236993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.237018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.237186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.237215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 
00:33:48.528 [2024-07-23 06:29:41.237384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.237409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.237577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.237602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.237825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.237851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.238028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.238054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.238204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.238229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.238427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.238452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.238628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.238654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.238830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.238855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.239035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.239059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.239229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.239253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 
00:33:48.528 [2024-07-23 06:29:41.239423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.239470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.239663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.239688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.239867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.239892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.240048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.240073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.240243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.240267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.240457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.240485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.240682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.240708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.240881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.240906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.241056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.241081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 00:33:48.528 [2024-07-23 06:29:41.241227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.241251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it. 
00:33:48.528 [2024-07-23 06:29:41.241450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.528 [2024-07-23 06:29:41.241474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.528 qpair failed and we were unable to recover it.
[The same three-message error sequence -- posix.c:1023:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats roughly 200 more times between 06:29:41.241 and 06:29:41.285 (log prefix 00:33:48.528 through 00:33:48.534); the duplicate repetitions are elided here.]
00:33:48.534 [2024-07-23 06:29:41.284895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.284923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it.
00:33:48.534 [2024-07-23 06:29:41.285120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.285145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.285320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.285346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.285560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.285588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.285795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.285820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.285963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.285988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.286136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.286161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.286308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.286333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.286502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.286527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.286727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.286755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.286952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.286977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 
00:33:48.534 [2024-07-23 06:29:41.287181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.287209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.287426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.287454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.287658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.287683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.287854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.287886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.288116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.288141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.534 [2024-07-23 06:29:41.288341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.534 [2024-07-23 06:29:41.288365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.534 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.288512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.288537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.288682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.288727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.288927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.288952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.289121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.289146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 
00:33:48.535 [2024-07-23 06:29:41.289316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.289341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.289486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.289511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.289691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.289718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.289878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.289907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.290078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.290103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.290291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.290319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.290512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.290537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.290714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.290739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.290937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.290965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.291160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.291184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 
00:33:48.535 [2024-07-23 06:29:41.291356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.291380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.291576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.291604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.291769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.291797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.292020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.292046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.292208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.292238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.292466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.292494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.292700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.292725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.292925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.292953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.293122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.293149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.293347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.293372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 
00:33:48.535 [2024-07-23 06:29:41.293563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.293590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.293803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.293829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.294002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.294028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.294235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.294260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.294481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.294509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.294738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.294763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.294956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.294984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.295177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.295205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.295398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.295423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.295597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.295626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 
00:33:48.535 [2024-07-23 06:29:41.295849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.295877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.296039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.296064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.296263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.296291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.535 [2024-07-23 06:29:41.296480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.535 [2024-07-23 06:29:41.296510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.535 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.296712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.296739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.296937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.296966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.297148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.297176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.297362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.297387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.297532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.297557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.297734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.297760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 
00:33:48.536 [2024-07-23 06:29:41.297956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.297982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.298180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.298208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.298426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.298454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.298658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.298684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.298858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.298883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.299068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.299096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.299285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.299310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.299513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.299540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.299718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.299747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.299943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.299968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 
00:33:48.536 [2024-07-23 06:29:41.300167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.300195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.300391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.300419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.300646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.300672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.300867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.300895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.301093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.301121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.301294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.301319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.301479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.301507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.301726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.301755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.301927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.301953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.302099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.302141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 
00:33:48.536 [2024-07-23 06:29:41.302337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.302365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.302586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.302621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.302869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.302899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.303093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.303122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.303324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.303349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.303511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.303539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.303745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.303771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.303945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.303970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.304145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.304171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.304366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.304395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 
00:33:48.536 [2024-07-23 06:29:41.304592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.304632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.536 [2024-07-23 06:29:41.304792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.536 [2024-07-23 06:29:41.304817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.536 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.305007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.305035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.305224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.305250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.305436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.305461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.305669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.305699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.305889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.305913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.306087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.306112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.306279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.306304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.306474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.306500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 
00:33:48.537 [2024-07-23 06:29:41.306646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.306672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.306860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.306888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.307083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.307108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.307274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.307302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.307528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.307553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.307731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.307757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.307936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.307962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.308136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.308164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.308353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.308382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.308581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.308606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 
00:33:48.537 [2024-07-23 06:29:41.308802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.308830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.309030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.309055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.309222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.309250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.309438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.309466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.309694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.309719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.309893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.309921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.310137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.310165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.310383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.310409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.310619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.310644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.310862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.310890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 
00:33:48.537 [2024-07-23 06:29:41.311116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.311141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.311330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.311357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.311581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.311609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.311796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.311822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.311998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.312023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.312224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.312252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.312470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.312495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.312693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.312721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.312913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.312943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 00:33:48.537 [2024-07-23 06:29:41.313142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.537 [2024-07-23 06:29:41.313167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.537 qpair failed and we were unable to recover it. 
00:33:48.537 [2024-07-23 06:29:41.313339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.538 [2024-07-23 06:29:41.313364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420
00:33:48.538 qpair failed and we were unable to recover it.
00:33:48.543 [2024-07-23 06:29:41.313549 .. 06:29:41.358567] The same three-line failure repeats continuously over this interval: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x17ea4b0 (addr=10.0.0.2, port=4420), and each qpair fails and cannot be recovered. Repeated identical entries condensed.
00:33:48.543 [2024-07-23 06:29:41.358778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.543 [2024-07-23 06:29:41.358806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.543 qpair failed and we were unable to recover it. 00:33:48.543 [2024-07-23 06:29:41.359033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.543 [2024-07-23 06:29:41.359058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.543 qpair failed and we were unable to recover it. 00:33:48.543 [2024-07-23 06:29:41.359229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.543 [2024-07-23 06:29:41.359254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.543 qpair failed and we were unable to recover it. 00:33:48.543 [2024-07-23 06:29:41.359472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.543 [2024-07-23 06:29:41.359500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.543 qpair failed and we were unable to recover it. 00:33:48.543 [2024-07-23 06:29:41.359697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.543 [2024-07-23 06:29:41.359723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.543 qpair failed and we were unable to recover it. 00:33:48.543 [2024-07-23 06:29:41.359899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.543 [2024-07-23 06:29:41.359924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.360136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.360161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.360350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.360378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.360551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.360576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.360767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.360796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 
00:33:48.544 [2024-07-23 06:29:41.360987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.361014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.361233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.361258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.361412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.361437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.361595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.361643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.361864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.361889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.362078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.362106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.362295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.362322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.362546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.362572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.362751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.362777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.362954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.362980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 
00:33:48.544 [2024-07-23 06:29:41.363144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.363169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.363394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.363422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.363617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.363647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.363846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.363871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.364038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.364066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.364266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.364292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 
00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Write completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 Read completed with error (sct=0, sc=8) 00:33:48.544 starting I/O failed 00:33:48.544 [2024-07-23 06:29:41.364597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:48.544 [2024-07-23 06:29:41.364797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.364836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 
00:33:48.544 [2024-07-23 06:29:41.365055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.365083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.365277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.365306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.365521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.365549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.365738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.365765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.544 [2024-07-23 06:29:41.365966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.544 [2024-07-23 06:29:41.365997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.544 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.366224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.366250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.366458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.366487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.366697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.366723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.366896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.366922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.367066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.367108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 
00:33:48.545 [2024-07-23 06:29:41.367323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.367352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.367518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.367546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.367744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.367771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.367993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.368022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.368380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.368430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.368625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.368670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.368844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.368870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.369084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.369110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.369279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.369307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.369505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.369533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 
00:33:48.545 [2024-07-23 06:29:41.369733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.369760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.369982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.370011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.370230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.370282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.370502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.370531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.370712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.370739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.370939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.370968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.371195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.371221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.371395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.371424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.371630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.371659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.371869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.371895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 
00:33:48.545 [2024-07-23 06:29:41.372071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.372100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.372262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.372292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.372512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.372545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.372722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.372748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.372947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.372975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.373204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.373229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.373403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.373432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.373603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.373638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.373816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.373841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.374061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.374089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 
00:33:48.545 [2024-07-23 06:29:41.374357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.374408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.374621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.545 [2024-07-23 06:29:41.374647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.545 qpair failed and we were unable to recover it. 00:33:48.545 [2024-07-23 06:29:41.374850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.374876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.375046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.375074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.375399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.375445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.375603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.375662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.375847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.375872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.376079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.376106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.376306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.376335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.376521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.376549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 
00:33:48.546 [2024-07-23 06:29:41.376751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.376777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.376949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.376974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.377191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.377220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.377519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.377579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.377810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.377836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.378029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.378058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.378290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.378316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.378483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.378511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.378713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.378742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.378945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.378971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 
00:33:48.546 [2024-07-23 06:29:41.379194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.379222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.379453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.379479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.379654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.379682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.379915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.379944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.380162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.380188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.380360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.380386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.380580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.380610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.380848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.380877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.381074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.381099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.381250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.381276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 
00:33:48.546 [2024-07-23 06:29:41.381422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.381448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.381629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.381657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.381859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.381893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.382095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.382121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.382327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.382353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.382555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.382586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.382784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.382814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.383013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.383039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.383208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.383234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 00:33:48.546 [2024-07-23 06:29:41.383435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.546 [2024-07-23 06:29:41.383464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.546 qpair failed and we were unable to recover it. 
00:33:48.546 [2024-07-23 06:29:41.383657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.383685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.383880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.383909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.384108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.384134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.384278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.384303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.384507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.384536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.384727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.384756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.384927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.384953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.385101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.385145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.385334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.385363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.385583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.385609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 
00:33:48.547 [2024-07-23 06:29:41.385784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.385813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.386004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.386032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.386253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.386279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.386464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.386493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.386651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.386680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.386850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.386876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.387026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.387052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.387232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.387258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.387465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.387491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.387686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.387725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 
00:33:48.547 [2024-07-23 06:29:41.387952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.387981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.388148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.388174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.388331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.388356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.388551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.388579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.388811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.388837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.388993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.389018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.389161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.389203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.389389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.389415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.389603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.389638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.389851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.389879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 
00:33:48.547 [2024-07-23 06:29:41.390103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.390128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.547 [2024-07-23 06:29:41.390407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.547 [2024-07-23 06:29:41.390465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.547 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.390636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.390671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.390871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.390896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.391053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.391078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.391221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.391246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.391391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.391416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.391621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.391653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.391852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.391880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.392080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.392105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 
00:33:48.548 [2024-07-23 06:29:41.392260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.392285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.392437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.392462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.392641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.392667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.392833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.392861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.393054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.393082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.393250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.393275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.393426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.393452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.393600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.393631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.393801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.393827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.394018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.394045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 
00:33:48.548 [2024-07-23 06:29:41.394241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.394266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.394438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.394463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.394610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.394641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.394786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.394827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.395025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.395051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.395205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.395231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.395429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.395457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.395649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.395675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.395869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.395897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.396104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.396132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 
00:33:48.548 [2024-07-23 06:29:41.396326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.396351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.396522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.396548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.396739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.396768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.396963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.396989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.397143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.397169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.397366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.397394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.397582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.397608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.397788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.397816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.398006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.398034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.548 qpair failed and we were unable to recover it. 00:33:48.548 [2024-07-23 06:29:41.398224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.548 [2024-07-23 06:29:41.398249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 
00:33:48.549 [2024-07-23 06:29:41.398461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.398510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.398689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.398716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.398925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.398954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.399131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.399161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.399351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.399379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.399606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.399636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.399823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.399852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.400033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.400061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.400279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.400304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.400502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.400530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 
00:33:48.549 [2024-07-23 06:29:41.400751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.400780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.400974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.400999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.401193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.401221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.401392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.401420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.401624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.401649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.401845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.401874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.402059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.402087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.402282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.402307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.402499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.402528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.402691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.402720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 
00:33:48.549 [2024-07-23 06:29:41.402916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.402942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.403161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.403191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.403378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.403406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.403604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.403635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.403832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.403860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.404080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.404107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.404303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.404329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.404525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.404553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.404766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.404792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.404994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.405019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 
00:33:48.549 [2024-07-23 06:29:41.405185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.405213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.405409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.405435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.405610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.405640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.405846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.405875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.406092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.406120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.406313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.406339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.406516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.549 [2024-07-23 06:29:41.406541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.549 qpair failed and we were unable to recover it. 00:33:48.549 [2024-07-23 06:29:41.406736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.406765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.406991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.407017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.407180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.407208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 
00:33:48.550 [2024-07-23 06:29:41.407425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.407453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.407660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.407686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.407881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.407912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.408135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.408161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.408363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.408388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.408584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.408628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.408832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.408858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.409030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.409055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.409270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.409298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.409490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.409518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 
00:33:48.550 [2024-07-23 06:29:41.409676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.409702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.409891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.409918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.410107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.410135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.410357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.410383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.410559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.410584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.410742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.410769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.410926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.410952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.411143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.411171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.411355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.411383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.411580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.411605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 
00:33:48.550 [2024-07-23 06:29:41.411768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.411796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.411994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.412020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.412187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.412213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.412358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.412384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.412605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.412638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.412807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.412832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.413004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.413029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.413196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.413221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.413397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.413422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.413621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.413650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 
00:33:48.550 [2024-07-23 06:29:41.413848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.413873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.414020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.414046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.414226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.414252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.414474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.414501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.550 [2024-07-23 06:29:41.414674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.550 [2024-07-23 06:29:41.414700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.550 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.414919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.414947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.415105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.415134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.415354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.415379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.415554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.415582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.415781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.415807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 
00:33:48.551 [2024-07-23 06:29:41.415954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.415979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.416127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.416153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.416328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.416358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.416603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.416640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.416876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.416901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.417119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.417146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.417344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.417369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.417570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.417598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.417802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.417830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.417992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.418018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 
00:33:48.551 [2024-07-23 06:29:41.418169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.418195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.418410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.418438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.418605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.418635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.418805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.418833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.419053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.419082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.419277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.419301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.419475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.419503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.419697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.419726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.419948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.419973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.420118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.420144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 
00:33:48.551 [2024-07-23 06:29:41.420297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.420339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.420528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.420553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.420749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.420777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.420977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.421005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.421167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.421193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.421357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.421385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.421574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.421603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.421773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.421798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.421999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.422027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.422181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.422210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 
00:33:48.551 [2024-07-23 06:29:41.422372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.422398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.422627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.422656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.551 [2024-07-23 06:29:41.422820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.551 [2024-07-23 06:29:41.422849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.551 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.423040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.423066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.423265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.423293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.423465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.423491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.423641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.423668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.423898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.423926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.424090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.424118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.424313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.424339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 
00:33:48.552 [2024-07-23 06:29:41.424576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.424603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.424846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.424875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.425039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.425068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.425293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.425321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.425493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.425521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.425709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.425737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.425934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.425960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.426151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.426178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.426375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.426401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.426625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.426653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 
00:33:48.552 [2024-07-23 06:29:41.426861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.426887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.427085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.427112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.427305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.427330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.427473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.427498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.427673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.427698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.427843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.427869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.428066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.428094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.428285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.428313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.428507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.428532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.428690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.428718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 
00:33:48.552 [2024-07-23 06:29:41.428905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.428934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.429158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.429186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.429405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.429430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.429624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.552 [2024-07-23 06:29:41.429652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.552 qpair failed and we were unable to recover it. 00:33:48.552 [2024-07-23 06:29:41.429871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.429899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.430093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.430122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.430347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.430373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.430540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.430568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.430766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.430795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.431017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.431055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 
00:33:48.553 [2024-07-23 06:29:41.431237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.431263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.431534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.431583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.431785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.431811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.431985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.432010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.432181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.432206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.432411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.432461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.432653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.432696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.432871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.432895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.433130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.433155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.433379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.433429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 
00:33:48.553 [2024-07-23 06:29:41.433646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.433688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.433867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.433891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.434105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.434130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.434333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.434360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.434551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.434579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.434785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.434810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.434963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.434988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.435180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.435208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.435432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.435457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.435654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.435698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 
00:33:48.553 [2024-07-23 06:29:41.435846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.435871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.436047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.436072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.436211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.436236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.436573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.436654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.436856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.436883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.437032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.437060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.437242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.437273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.437450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.437475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.437682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.437709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.437865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.437890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 
00:33:48.553 [2024-07-23 06:29:41.438035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.438061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.438238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.553 [2024-07-23 06:29:41.438263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.553 qpair failed and we were unable to recover it. 00:33:48.553 [2024-07-23 06:29:41.438442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.438468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.438687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.438714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.438865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.438890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.439087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.439115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.439316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.439341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.439562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.439590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.439800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.439826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.440003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.440031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 
00:33:48.554 [2024-07-23 06:29:41.440202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.440228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.440406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.440431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.440637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.440682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.440826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.440852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.441005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.441030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.441200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.441228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.441437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.441463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.441711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.441737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.441942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.441967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.442140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.442168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 
00:33:48.554 [2024-07-23 06:29:41.442359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.442386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.442534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.442560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.442736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.442761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.442913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.442939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.443177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.443206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.443472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.443525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.443729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.443755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.443940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.443965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.444133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.444160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.444324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.444352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 
00:33:48.554 [2024-07-23 06:29:41.444525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.444550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.444748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.444774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.445002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.445030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.445296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.445324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.445520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.445546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.445695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.445722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.445909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.445941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.446132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.446160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.446347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.446372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 00:33:48.554 [2024-07-23 06:29:41.446595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.554 [2024-07-23 06:29:41.446629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.554 qpair failed and we were unable to recover it. 
00:33:48.554 [2024-07-23 06:29:41.446839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.446864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.447056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.447084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.447268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.447294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.447487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.447516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.447715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.447741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.447891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.447917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.448068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.448093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.448346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.448397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.448597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.448641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.448814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.448839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 
00:33:48.555 [2024-07-23 06:29:41.449026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.449051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.449199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.449226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.449403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.449432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.449629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.449656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.449843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.449868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.450062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.450090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.450283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.450308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.450481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.450506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.450682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.450708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.450881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.450924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 
00:33:48.555 [2024-07-23 06:29:41.451114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.451142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.451446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.451500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.451700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.451726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.451899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.451927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.452084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.452110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.452300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.452328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.452530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.452555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.452772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.452798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.453002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.453031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.453226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.453251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 
00:33:48.555 [2024-07-23 06:29:41.453399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.453424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.453625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.453653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.453844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.453871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.454062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.454089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.454269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.454294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.454458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.454486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.454695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.454725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.555 [2024-07-23 06:29:41.454904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.555 [2024-07-23 06:29:41.454946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.555 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.455119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.455144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.455339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.455367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 
00:33:48.556 [2024-07-23 06:29:41.455559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.455587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.455815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.455840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.456001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.456027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.456225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.456251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.456449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.456476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.456664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.456706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.456886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.456912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.457132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.457160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.457359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.457386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.457577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.457630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 
00:33:48.556 [2024-07-23 06:29:41.457819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.457845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.458001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.458025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.458248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.458276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.458469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.458498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.458697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.458722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.458923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.458951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.459139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.459167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.459355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.459383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.459576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.459601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.459786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.459811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 
00:33:48.556 [2024-07-23 06:29:41.459983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.460008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.460199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.460227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.460397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.460423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.460604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.460644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.460810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.460840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.461034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.461062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.461262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.461288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.461506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.461535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.461734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.461760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.461996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.462024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 
00:33:48.556 [2024-07-23 06:29:41.462183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.462208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.462361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.462403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.462594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.462628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.462804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.462832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.463028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.463053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.556 [2024-07-23 06:29:41.463219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.556 [2024-07-23 06:29:41.463247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.556 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.463466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.463495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.463669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.463697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.463892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.463918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.464073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.464099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 
00:33:48.557 [2024-07-23 06:29:41.464300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.464328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.464537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.464562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.464718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.464745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.464916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.464942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.465167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.465195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.465404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.465430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.465578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.465603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.465760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.465786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.466008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.466036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.466230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.466258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 
00:33:48.557 [2024-07-23 06:29:41.466461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.466488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.466678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.466707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.466906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.466935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.467137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.467163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.467370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.467395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.467599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.467630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.467810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.467836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.468021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.468047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.468202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.468228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.468451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.468479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 
00:33:48.557 [2024-07-23 06:29:41.468665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.468691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.468890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.468919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.469147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.469172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.469338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.469367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.469559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.469587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.469804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.469831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.470000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.557 [2024-07-23 06:29:41.470025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.557 qpair failed and we were unable to recover it. 00:33:48.557 [2024-07-23 06:29:41.470256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.470284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.470517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.470542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.470731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.470759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 
00:33:48.558 [2024-07-23 06:29:41.470951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.470977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.471200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.471228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.471419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.471446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.471639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.471669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.471864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.471889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.472111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.472139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.472347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.472376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.472555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.472580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.472746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.472771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.472941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.472966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 
00:33:48.558 [2024-07-23 06:29:41.473165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.473190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.473367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.473394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.473620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.473646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.473857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.473885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.474087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.474112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.474252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.474277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.474453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.474478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.474682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.474711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.474923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.474951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.475133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.475160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 
00:33:48.558 [2024-07-23 06:29:41.475354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.475381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.475570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.475598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.475783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.475811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.476002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.476030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.476222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.476247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.476410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.476438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.476632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.476661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.476857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.476885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.477073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.477099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.477303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.477331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 
00:33:48.558 [2024-07-23 06:29:41.477561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.477588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.477794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.477820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.477991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.478017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.478196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.558 [2024-07-23 06:29:41.478224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.558 qpair failed and we were unable to recover it. 00:33:48.558 [2024-07-23 06:29:41.478456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.478481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.478651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.478679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.478873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.478898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.479095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.479122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.479321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.479348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.479543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.479571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 
00:33:48.559 [2024-07-23 06:29:41.479769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.479794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.479963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.479992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.480152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.480181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.480376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.480404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.480566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.480591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.480810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.480836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.481029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.481061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.481277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.481303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.481468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.481493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.481637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.481664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 
00:33:48.559 [2024-07-23 06:29:41.481841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.481867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.482093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.482121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.482315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.482340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.482541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.482569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.482748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.482777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.482996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.483022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.483199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.483224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.483395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.483424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.483626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.483655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.483846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.483874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 
00:33:48.559 [2024-07-23 06:29:41.484069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.484094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.484291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.484319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.484512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.484540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.484732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.484761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.484944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.484970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.485123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.485148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.485316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.485341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.485516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.485542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.485718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.485744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.485931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.485960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 
00:33:48.559 [2024-07-23 06:29:41.486177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.486205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.486429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.559 [2024-07-23 06:29:41.486455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.559 qpair failed and we were unable to recover it. 00:33:48.559 [2024-07-23 06:29:41.486635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.486661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.486808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8470 is same with the state(5) to be set 00:33:48.560 [2024-07-23 06:29:41.487018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.487061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.487304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.487358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.487525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.487553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.487785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.487815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.488027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.488054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.488206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.488232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 
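Editor's note: the single nvme_tcp_qpair_set_recv_state line above stands out from the surrounding connect() failures; it is the kind of trace a state-machine setter emits when it is asked to re-enter the state it is already in. A minimal sketch of such a guard follows, purely as an assumption-laden illustration: the enum names and values below are invented for the example and are not taken from nvme_tcp.c.

    #include <stdio.h>

    /* Hypothetical receive-state enum; value 5 mirrors "state(5)" in the log,
     * but the names are assumptions, not SPDK's actual definitions. */
    enum recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        RECV_STATE_AWAIT_PDU_CH,
        RECV_STATE_AWAIT_PDU_PSH,
        RECV_STATE_AWAIT_PDU_PAYLOAD,
        RECV_STATE_ERROR,
        RECV_STATE_QUIESCING          /* = 5 */
    };

    struct tqpair {
        enum recv_state recv_state;
    };

    /* Guard that refuses (and logs) a redundant transition -- the kind of check
     * that produces "The recv state ... is same with the state(N) to be set". */
    static void set_recv_state(struct tqpair *q, enum recv_state state)
    {
        if (q->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)q, (int)state);
            return;
        }
        q->recv_state = state;
    }

    int main(void)
    {
        struct tqpair q = { .recv_state = RECV_STATE_QUIESCING };
        set_recv_state(&q, RECV_STATE_QUIESCING);  /* triggers the error path */
        return 0;
    }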
00:33:48.560 [2024-07-23 06:29:41.488405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.488431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.488571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.488598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.488751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.488778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.488973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.489002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.489163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.489192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.489386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.489412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.489608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.489647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.489848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.489877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.490103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.490129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.490364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.490429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 
00:33:48.560 [2024-07-23 06:29:41.490649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.490681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.490882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.490908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.491078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.491107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.491391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.491443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.491626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.491661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.491824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.491852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.492039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.492068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.492257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.492282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.492529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.492577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.492766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.492792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 
00:33:48.560 [2024-07-23 06:29:41.492935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.492967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.493122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.493165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.493462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.493513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.493704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.493731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.493876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.493921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.494110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.494138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.494340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.494365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.494545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.494570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.494749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.494776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 00:33:48.560 [2024-07-23 06:29:41.494944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.494970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.560 qpair failed and we were unable to recover it. 
00:33:48.560 [2024-07-23 06:29:41.495251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.560 [2024-07-23 06:29:41.495301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.495497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.495526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.495695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.495721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.495872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.495915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.496125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.496150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.496321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.496348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.496517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.496543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.496713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.496739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.496911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.496936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 00:33:48.561 [2024-07-23 06:29:41.497157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.561 [2024-07-23 06:29:41.497209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.561 qpair failed and we were unable to recover it. 
00:33:48.561 [2024-07-23 06:29:41.497378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.561 [2024-07-23 06:29:41.497407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:48.561 qpair failed and we were unable to recover it.
[... the same pair of errors — posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 — repeats continuously for every reconnect attempt from 06:29:41.497 through 06:29:41.543, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:48.567 [2024-07-23 06:29:41.543533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.567 [2024-07-23 06:29:41.543560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:48.567 qpair failed and we were unable to recover it.
00:33:48.567 [2024-07-23 06:29:41.543736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.543762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.543940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.543965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.544138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.544167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.544329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.544359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.544554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.544580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.544763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.544792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.544993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.545020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.545202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.545228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.545395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.545423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.545611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.545646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 
00:33:48.567 [2024-07-23 06:29:41.545816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.545845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.546083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.546110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.546301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.546330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.546496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.546521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.546756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.546786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.546959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.546985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.547160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.547185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.547380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.547408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.547627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.547663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.547835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.547860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 
00:33:48.567 [2024-07-23 06:29:41.548070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.548098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.567 [2024-07-23 06:29:41.548264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.567 [2024-07-23 06:29:41.548290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.567 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.548492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.548518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.548722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.548751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.548929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.548955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.549131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.549157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.549317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.549342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.549564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.549592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.549776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.549802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.550002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.550027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 
00:33:48.568 [2024-07-23 06:29:41.550233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.550261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.550449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.550475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.550653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.550682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.550895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.550920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.551104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.551129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.551323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.551351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.551545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.551573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.551784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.551810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.551972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.552000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.552187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.552215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 
00:33:48.568 [2024-07-23 06:29:41.552441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.552466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.552651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.552677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.552843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.552872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.553099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.553124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.553270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.553296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.553449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.553474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.553681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.553707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.553891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.553919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.554122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.554147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.554297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.554322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 
00:33:48.568 [2024-07-23 06:29:41.554458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.554488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.554680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.554709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.554894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.554920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.555111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.555140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.555325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.555353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.555546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.568 [2024-07-23 06:29:41.555571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.568 qpair failed and we were unable to recover it. 00:33:48.568 [2024-07-23 06:29:41.555753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.555782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.555965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.555994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.556199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.556224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.556377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.556403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 
00:33:48.569 [2024-07-23 06:29:41.556595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.556632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.556856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.556881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.557107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.557135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.557332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.557357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.557562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.557587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.557794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.557824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.558042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.558070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.558287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.558312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.558531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.558559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.558731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.558757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 
00:33:48.569 [2024-07-23 06:29:41.558930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.558955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.559148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.559177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.559394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.559422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.559626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.559652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.559851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.559878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.560105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.560133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.560330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.560356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.560549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.560577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.560812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.560840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.561033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.561058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 
00:33:48.569 [2024-07-23 06:29:41.561282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.561311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.561509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.561537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.561729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.561754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.561925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.561953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.562167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.562195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.562374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.562399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.562593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.562632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.562810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.562836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.562985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.563010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.563211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.563239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 
00:33:48.569 [2024-07-23 06:29:41.563407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.563441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.563636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.563662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.563848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.563876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.564071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.564099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.564272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.564298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.569 [2024-07-23 06:29:41.564470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.569 [2024-07-23 06:29:41.564496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.569 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.564679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.564705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.564893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.564919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.565109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.565138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.565360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.565388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 
00:33:48.570 [2024-07-23 06:29:41.565584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.565609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.565821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.565849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.566038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.566066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.566288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.566314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.566477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.566502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.566671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.566706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.566850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.566875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.567073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.567101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.567255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.567283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.567482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.567508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 
00:33:48.570 [2024-07-23 06:29:41.567727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.567757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.567927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.567955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.568155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.568181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.568375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.568403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.568631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.568657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.568856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.568882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.569079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.569107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.569268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.569295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.569489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.569515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.569716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.569744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 
00:33:48.570 [2024-07-23 06:29:41.569938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.569966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.570182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.570208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.570434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.570462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.570688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.570717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.570891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.570917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.571091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.571117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.571317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.571346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.571573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.571598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.571749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.571774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 00:33:48.570 [2024-07-23 06:29:41.571968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.570 [2024-07-23 06:29:41.571997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.570 qpair failed and we were unable to recover it. 
00:33:48.570 [2024-07-23 06:29:41.572224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.570 [2024-07-23 06:29:41.572278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:48.570 qpair failed and we were unable to recover it.
00:33:48.570 [... the same three-record sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times between 06:29:41.572 and 06:29:41.617 ...]
00:33:48.577 [2024-07-23 06:29:41.617414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.577 [2024-07-23 06:29:41.617439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:48.577 qpair failed and we were unable to recover it.
00:33:48.577 [2024-07-23 06:29:41.617584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.617611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.617802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.617827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.617996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.618021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.618215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.618241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.618417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.618443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.618623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.618665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.618864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.618889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.619034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.619060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.619235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.619261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.619414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.619441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 
00:33:48.577 [2024-07-23 06:29:41.619609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.619648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.619850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.619875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.620018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.620043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.620263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.620292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.620483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.620509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.620665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.620692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.620843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.620886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.621108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.621143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.621333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.621359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.621536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.621562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 
00:33:48.577 [2024-07-23 06:29:41.621736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.621762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.621951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.621976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.622171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.622199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.622441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.622469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.622681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.622707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.622883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.622912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.623128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.623153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.623322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.623347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.623545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.623571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.623732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.623758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 
00:33:48.577 [2024-07-23 06:29:41.623929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.623954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.624152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.624181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.624414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.624439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.624624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.624650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.624829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.624857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.577 qpair failed and we were unable to recover it. 00:33:48.577 [2024-07-23 06:29:41.625078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.577 [2024-07-23 06:29:41.625106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.625302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.625327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.625477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.625503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.625690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.625716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.625892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.625918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 
00:33:48.578 [2024-07-23 06:29:41.626091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.626117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.626340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.626368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.626541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.626567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.626777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.626806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.627007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.627033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.627181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.627207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.627376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.627402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.627579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.627604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.627778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.627803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.627969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.627997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 
00:33:48.578 [2024-07-23 06:29:41.628229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.628254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.628452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.628477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.628625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.628673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.628863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.628888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.629062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.629087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.629311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.629339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.629511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.629536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.629688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.629717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.629924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.629952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.630193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.630221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 
00:33:48.578 [2024-07-23 06:29:41.630388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.630413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.630605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.630637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.630836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.630864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.631046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.631071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.631243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.631269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.631472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.631497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.631674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.631700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.631862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.631890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.632093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.632119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.632297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.632322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 
00:33:48.578 [2024-07-23 06:29:41.632474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.632500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.632696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.632722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.632905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.578 [2024-07-23 06:29:41.632930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.578 qpair failed and we were unable to recover it. 00:33:48.578 [2024-07-23 06:29:41.633107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.633135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.633339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.633366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.633539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.633564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.633737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.633766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.633958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.633984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.634161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.634186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.634361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.634386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 
00:33:48.579 [2024-07-23 06:29:41.634587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.634621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.634769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.634795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.634967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.634996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.635248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.635277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.635469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.635494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.635673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.635699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.635843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.635869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.636072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.636098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.636241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.636266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.636412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.636438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 
00:33:48.579 [2024-07-23 06:29:41.636593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.636624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.636770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.636795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.636964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.636989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.637185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.637211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.637387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.637412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.637566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.637592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.637747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.637772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.637943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.637975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.638194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.638221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.638443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.638468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 
00:33:48.579 [2024-07-23 06:29:41.638672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.638702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.638870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.638895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.639064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.639089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.639236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.639263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.639415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.639441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.639618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.639644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.639795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.639821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.639978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.640021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.640213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.640238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.579 [2024-07-23 06:29:41.640394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.640419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 
00:33:48.579 [2024-07-23 06:29:41.640565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.579 [2024-07-23 06:29:41.640591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.579 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.640790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.640816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.640973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.640998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.641201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.641226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.641423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.641448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.641680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.641710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.641909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.641936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.642157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.642182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.642377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.642406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.642596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.642626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 
00:33:48.580 [2024-07-23 06:29:41.642815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.642841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.643011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.643039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.643260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.643288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.643482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.643508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.643661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.643687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.643863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.643889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.644062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.644087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.644227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.644268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.644467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.644493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.644676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.644703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 
00:33:48.580 [2024-07-23 06:29:41.644873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.644902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.645063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.645093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.645312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.645337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.645535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.645563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.645769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.645795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.645944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.645970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.646160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.646188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.646415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.646448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.646643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.646679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.646852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.646878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 
00:33:48.580 [2024-07-23 06:29:41.647024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.580 [2024-07-23 06:29:41.647050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.580 qpair failed and we were unable to recover it. 00:33:48.580 [2024-07-23 06:29:41.647243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.581 [2024-07-23 06:29:41.647268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.581 qpair failed and we were unable to recover it. 00:33:48.581 [2024-07-23 06:29:41.647447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.581 [2024-07-23 06:29:41.647475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.581 qpair failed and we were unable to recover it. 00:33:48.581 [2024-07-23 06:29:41.647697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.581 [2024-07-23 06:29:41.647723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.581 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.647898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.647923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.648099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.648125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.648328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.648356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.648528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.648553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.648748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.648777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.648943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.648972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 
00:33:48.582 [2024-07-23 06:29:41.649144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.649170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.649357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.649382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.649577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.649605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.649838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.649863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.650089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.650118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.650305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.650333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.650560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.650585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.650764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.650793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.650987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.651015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.651210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.651235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 
00:33:48.582 [2024-07-23 06:29:41.651425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.651454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.651649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.651678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.651871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.651897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.652069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.652096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.652286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.652314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.652509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.652534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.652759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.652788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.652978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.653006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.653176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.653201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.653391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.653419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 
00:33:48.582 [2024-07-23 06:29:41.653609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.653645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.653841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.653866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.654051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.654079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.654247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.654275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.654472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.654497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.654654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.654680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.654881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.654907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.655075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.655104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.655304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.655333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.655537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.655563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 
00:33:48.582 [2024-07-23 06:29:41.655764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.582 [2024-07-23 06:29:41.655790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.582 qpair failed and we were unable to recover it. 00:33:48.582 [2024-07-23 06:29:41.655999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.656024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.656180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.656205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.656380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.656406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.656598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.656645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.656875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.656903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.657122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.657147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.657339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.657367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.657553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.657581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.657750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.657775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 
00:33:48.583 [2024-07-23 06:29:41.658000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.658028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.658231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.658260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.658454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.658479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.658652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.658678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.658847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.658875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.659069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.659095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.659272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.659298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.659494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.659523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.659710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.659737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.659961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.659990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 
00:33:48.583 [2024-07-23 06:29:41.660174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.660202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.660393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.660418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.660582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.660611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.660838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.660866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.661070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.661095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.661245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.661271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.661467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.661495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.661695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.661721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.661916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.661944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.662108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.662137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 
00:33:48.583 [2024-07-23 06:29:41.662330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.662356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.662545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.662573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.662777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.662803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.662982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.663008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.663157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.663183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.663386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.663414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.663623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.663649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.663843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.663877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.664068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.583 [2024-07-23 06:29:41.664096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.583 qpair failed and we were unable to recover it. 00:33:48.583 [2024-07-23 06:29:41.664265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.664291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 
00:33:48.584 [2024-07-23 06:29:41.664450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.664477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.664648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.664674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.664851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.664876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.665037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.665065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.665251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.665280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.665453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.665479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.665639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.665668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.665856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.665884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.666047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.666073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.666304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.666333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 
00:33:48.584 [2024-07-23 06:29:41.666559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.666585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.666764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.666791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.667001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.667026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.667244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.667272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.667497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.667523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.667673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.667700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.667888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.667917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.668137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.668163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.668306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.668332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.668487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.668529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 
00:33:48.584 [2024-07-23 06:29:41.668721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.668748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.668971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.669000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.669219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.669248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.669470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.669495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.669697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.669726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.669945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.669973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.670171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.670196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.670395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.670422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.670593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.670625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.670848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.670875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 
00:33:48.584 [2024-07-23 06:29:41.671071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.671100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.671289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.671317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.671517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.671543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.671740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.671770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.671992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.672018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.672200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.672225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.672423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.584 [2024-07-23 06:29:41.672451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.584 qpair failed and we were unable to recover it. 00:33:48.584 [2024-07-23 06:29:41.672644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.672677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.672844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.672870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.673064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.673092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 
00:33:48.585 [2024-07-23 06:29:41.673283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.673311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.673484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.673509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.673704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.673732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.673900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.673929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.674091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.674117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.674306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.674334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.674551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.674577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.674730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.674757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.674951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.674980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.675200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.675228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 
00:33:48.585 [2024-07-23 06:29:41.675427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.675452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.675604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.675638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.675788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.675814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.675989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.676014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.676231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.676259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.676445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.676473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.676643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.676669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.676857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.676886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.677094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.677119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.677323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.677349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 
00:33:48.585 [2024-07-23 06:29:41.677529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.677555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.677733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.677759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.677957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.677982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.678185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.678212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.678421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.678447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.678623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.678649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.678846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.678874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.679050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.679076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.679256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.679282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.679505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.679533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 
00:33:48.585 [2024-07-23 06:29:41.679735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.679763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.679941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.679966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.680147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.680172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.680371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.680399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.680626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.585 [2024-07-23 06:29:41.680652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.585 qpair failed and we were unable to recover it. 00:33:48.585 [2024-07-23 06:29:41.680821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.680849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.681040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.681068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.681259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.681288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.681437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.681463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.681620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.681645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 
00:33:48.586 [2024-07-23 06:29:41.681797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.681822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.681982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.682010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.682203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.682231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.682422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.682448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.682642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.682671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.682842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.682870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.683058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.683083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.683238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.683264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.683410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.683436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.683617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.683643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 
00:33:48.586 [2024-07-23 06:29:41.683839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.683867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.684067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.684093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.684294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.684319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.684506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.684534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.684759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.684789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.684968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.684995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.685189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.685219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.685385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.685413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.685604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.685636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.685789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.685830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 
00:33:48.586 [2024-07-23 06:29:41.686048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.686076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.686266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.686292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.686456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.686484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.686700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.586 [2024-07-23 06:29:41.686728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.586 qpair failed and we were unable to recover it. 00:33:48.586 [2024-07-23 06:29:41.686950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.686975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.687152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.687181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.687368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.687396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.687626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.687652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.687805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.687831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.688026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.688054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 
00:33:48.587 [2024-07-23 06:29:41.688255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.688280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.688473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.688501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.688727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.688753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.688902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.688928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.689105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.689130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.689333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.689361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.689576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.689602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.689804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.689836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.689994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.690023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.690218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.690243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 
00:33:48.587 [2024-07-23 06:29:41.690464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.690492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.690712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.690741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.690907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.690933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.691112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.691137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.691323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.691352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.691544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.691569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.691772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.691801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.692025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.692053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.692250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.692275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.692499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.692527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 
00:33:48.587 [2024-07-23 06:29:41.692742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.692771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.692974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.692999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.693195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.693223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.693445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.693474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.693673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.693699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.693896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.693924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.694082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.694110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.694305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.694330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.694503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.694528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.694699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.694727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 
00:33:48.587 [2024-07-23 06:29:41.694926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.694951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.695152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.587 [2024-07-23 06:29:41.695182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.587 qpair failed and we were unable to recover it. 00:33:48.587 [2024-07-23 06:29:41.695410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.695435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.695580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.695606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.695761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.695786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.695964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.695990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.696178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.696203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.696394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.696422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.696639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.696665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.696844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.696870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 
00:33:48.588 [2024-07-23 06:29:41.697029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.697057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.697239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.697267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.697437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.697463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.697683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.697712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.697923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.697951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.698143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.698169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.698319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.698345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.698518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.698544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.698746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.698772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.698948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.698977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 
00:33:48.588 [2024-07-23 06:29:41.699166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.699194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.699391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.699416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.699608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.699642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.699810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.699838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.700012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.700038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.700265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.700293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.700476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.700504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.700681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.700707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.700897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.700926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.701089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.701117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 
00:33:48.588 [2024-07-23 06:29:41.701310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.701337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.701511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.701538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.701760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.701789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.701990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.702015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.702207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.702235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.702431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.702459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.702621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.702646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.702789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.702815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.702988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.703014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 00:33:48.588 [2024-07-23 06:29:41.703187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.588 [2024-07-23 06:29:41.703212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.588 qpair failed and we were unable to recover it. 
00:33:48.589 [2024-07-23 06:29:41.703360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.703387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.703581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.703609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.703813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.703840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.704015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.704044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.704270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.704300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.704469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.704494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.704713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.704742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.704937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.704966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.705158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.705183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.705333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.705359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 
00:33:48.589 [2024-07-23 06:29:41.705502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.705543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.705747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.705773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.705914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.705940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.706155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.706183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.706384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.706410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.706560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.706585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.706762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.706788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.706960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.706985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.707163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.707193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.707382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.707412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 
00:33:48.589 [2024-07-23 06:29:41.707580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.707605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.707766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.707792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.707936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.707961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.708110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.708135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.708281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.708324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.708548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.708576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.708816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.708842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.709041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.709069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.709259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.709287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.709484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.709509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 
00:33:48.589 [2024-07-23 06:29:41.709682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.709709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.709864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.709890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.710090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.710116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.710310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.710339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.710502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.710530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.710722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.710748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.710951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.710977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.589 [2024-07-23 06:29:41.711166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.589 [2024-07-23 06:29:41.711195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.589 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.711386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.711411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.711638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.711667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 
00:33:48.590 [2024-07-23 06:29:41.711883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.711911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.712103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.712128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.712325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.712353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.712571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.712599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.712812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.712842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.713018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.713044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.713268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.713297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.713522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.713547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.713699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.713725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.713919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.713947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 
00:33:48.590 [2024-07-23 06:29:41.714167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.714192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.714369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.714394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.714563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.714592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.714769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.714795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.714946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.714972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.715152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.715177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.715360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.715387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.715588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.715620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.715831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.715861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.716078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.716103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 
00:33:48.590 [2024-07-23 06:29:41.716268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.716295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.716490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.716515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.716663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.716689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.716873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.716899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.717102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.717131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.717346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.717372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.717577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.717605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.717806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.717834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.718003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.718029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.718247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.718276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 
00:33:48.590 [2024-07-23 06:29:41.718439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.718467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.718641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.718668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.718864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.718890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.719092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.719120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.719314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.719339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.590 qpair failed and we were unable to recover it. 00:33:48.590 [2024-07-23 06:29:41.719514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.590 [2024-07-23 06:29:41.719540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.719733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.719766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.719966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.719992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.720171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.720197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.720366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.720395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 
00:33:48.591 [2024-07-23 06:29:41.720591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.720624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.720817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.720845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.721056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.721084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.721306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.721331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.721541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.721570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.721725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.721752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.721905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.721930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.722154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.722182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.722365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.722393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.722625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.722651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 
00:33:48.591 [2024-07-23 06:29:41.722828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.722856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.723056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.723083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.723264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.723289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.723484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.723512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.723710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.723739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.723935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.723959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.724135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.724163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.724363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.724388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.724567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.724592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.724790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.724835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 
00:33:48.591 [2024-07-23 06:29:41.725022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.725050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.725206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.725234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.725408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.725434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.725596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.725633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.725827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.725853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.726093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.726142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.726340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.726367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.591 [2024-07-23 06:29:41.726561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.591 [2024-07-23 06:29:41.726588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.591 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.726776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.726807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.726998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.727026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 
00:33:48.592 [2024-07-23 06:29:41.727196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.727222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.727423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.727480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.727705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.727731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.727896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.727922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.728140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.728189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.728380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.728408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.728634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.728660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.728806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.728832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.728988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.729031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.729221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.729246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 
00:33:48.592 [2024-07-23 06:29:41.729456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.729485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.729656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.729682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.729835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.729860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.730084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.730135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.730346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.730379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.730573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.730598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.730779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.730824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.730993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.731023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.731191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.731218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.731424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.731478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 
00:33:48.592 [2024-07-23 06:29:41.731641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.731672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.731874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.731900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.732099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.732128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.732326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.732355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.732552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.732578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.732732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.732776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.732991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.733020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.733194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.733221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.733417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.733447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.733635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.733664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 
00:33:48.592 [2024-07-23 06:29:41.733836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.733863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.734051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.734079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.734252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.734281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.734450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.734476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.734666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.734692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.592 qpair failed and we were unable to recover it. 00:33:48.592 [2024-07-23 06:29:41.734916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.592 [2024-07-23 06:29:41.734944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.735114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.735141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.735407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.735458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.735657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.735684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.735856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.735881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 
00:33:48.593 [2024-07-23 06:29:41.736132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.736183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.736409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.736437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.736624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.736650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.736809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.736837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.737007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.737035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.737269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.737294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.737488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.737516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.737672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.737701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.737891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.737916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.738143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.738194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 
00:33:48.593 [2024-07-23 06:29:41.738381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.738409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.738603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.738641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.738812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.738841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.739004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.739032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.739223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.739252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.739499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.739548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.739731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.739760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.739930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.739956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.740125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.740154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.740342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.740370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 
00:33:48.593 [2024-07-23 06:29:41.740567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.740592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.740779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.740823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.741022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.741052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.741254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.741279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.741522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.741572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.741782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.741808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.741951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.741977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.742179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.742204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.742422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.742451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.742650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.742676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 
00:33:48.593 [2024-07-23 06:29:41.742841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.742872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.743065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.743093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.743295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.593 [2024-07-23 06:29:41.743321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.593 qpair failed and we were unable to recover it. 00:33:48.593 [2024-07-23 06:29:41.743514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.743544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.743712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.743742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.743939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.743964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.744243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.744295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.744484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.744512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.744691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.744718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.744886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.744912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 
00:33:48.594 [2024-07-23 06:29:41.745065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.745091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.745239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.745265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.745494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.745522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.745744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.745773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.745977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.746002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.746359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.746417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.746608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.746650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.746851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.746878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.747024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.747066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.747258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.747286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 
00:33:48.594 [2024-07-23 06:29:41.747457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.747484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.747683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.747712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.747903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.747932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.748126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.748152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.748311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.748348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.748583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.748609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.748790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.748816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.749012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.749041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.749229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.749258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.749424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.749450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 
00:33:48.594 [2024-07-23 06:29:41.749652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.749678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.749839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.749868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.750064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.750090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.750237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.750264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.750499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.750528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.750696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.750723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.750944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.750973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.751138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.751168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.751348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.751373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.751553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.751580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 
00:33:48.594 [2024-07-23 06:29:41.751754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.594 [2024-07-23 06:29:41.751781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.594 qpair failed and we were unable to recover it. 00:33:48.594 [2024-07-23 06:29:41.751959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.751985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.752191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.752222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.752381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.752409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.752577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.752603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.752826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.752854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.753024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.753051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.753229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.753254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.753508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.753555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.753775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.753801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 
00:33:48.595 [2024-07-23 06:29:41.753950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.753975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.754172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.754201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.754401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.754430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.754603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.754642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.754817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.754842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.755044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.755072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.755286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.755311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.755474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.755502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.755697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.755726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.755895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.755922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 
00:33:48.595 [2024-07-23 06:29:41.756146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.756197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.756410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.756439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.756606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.756638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.756819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.756844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.757012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.757045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.757243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.757268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.757505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.757544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.757713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.757741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.757958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.757985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.758182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.758211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 
00:33:48.595 [2024-07-23 06:29:41.758400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.758428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.758628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.758655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.758857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.595 [2024-07-23 06:29:41.758886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.595 qpair failed and we were unable to recover it. 00:33:48.595 [2024-07-23 06:29:41.759068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.759094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.759297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.759323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.759519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.759547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.759714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.759745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.759944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.759971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.760203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.760253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.760444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.760473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 
00:33:48.596 [2024-07-23 06:29:41.760672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.760699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.760850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.760878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.761074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.761103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.761324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.761350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.761573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.761602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.761769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.761797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.762018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.762043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.762213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.762242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.762431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.762460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.762656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.762682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 
00:33:48.596 [2024-07-23 06:29:41.762875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.762903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.763099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.763127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.763336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.763361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.763507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.763533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.763682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.763725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.763946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.763971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.764212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.764260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.764425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.764455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.764639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.764665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.764809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.764835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 
00:33:48.596 [2024-07-23 06:29:41.765101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.765130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.765299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.765325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.765495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.765520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.765714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.765744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.765943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.765974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.766131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.766157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.766301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.766344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.766535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.766560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.766763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.766792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.766984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.767012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 
00:33:48.596 [2024-07-23 06:29:41.767205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.596 [2024-07-23 06:29:41.767230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.596 qpair failed and we were unable to recover it. 00:33:48.596 [2024-07-23 06:29:41.767424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.767452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.767674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.767703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.767881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.767907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.768058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.768084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.768274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.768302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.768558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.768584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.768787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.768817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.769051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.769076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.769239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.769264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 
00:33:48.597 [2024-07-23 06:29:41.769464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.769492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.769663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.769691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.769893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.769918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.770113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.770141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.770333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.770362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.770525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.770551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.770777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.770806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.770971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.771000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.771221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.771246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.771418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.771447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 
00:33:48.597 [2024-07-23 06:29:41.771611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.771645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.771799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.771825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.771967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.771993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.772228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.772254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.772404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.772429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.772601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.772635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.772794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.772822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.772987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.773013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.773211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.773236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.773433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.773461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 
00:33:48.597 [2024-07-23 06:29:41.773630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.773657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.773866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.773895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.774094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.774121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.774330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.774356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.774556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.774588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.774797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.774837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.775019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.775045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.775247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.775275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.775488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.597 [2024-07-23 06:29:41.775538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.597 qpair failed and we were unable to recover it. 00:33:48.597 [2024-07-23 06:29:41.775764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.775791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 
00:33:48.598 [2024-07-23 06:29:41.775956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.775984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.776235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.776286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.776462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.776487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.776665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.776691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.776830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.776856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.777028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.777053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.777254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.777284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.777505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.777533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.777768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.777794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.777999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.778027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 
00:33:48.598 [2024-07-23 06:29:41.778250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.778298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.778507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.778532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.778709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.778734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.778896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.778924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.779112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.779137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.779332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.779360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.779550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.779578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.779761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.779786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.779992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.780019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.780271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.780321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 
00:33:48.598 [2024-07-23 06:29:41.780510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.780535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.780711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.780751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.780931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.780973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.781172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.781197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.781396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.781420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.781581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.781609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.781833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.781858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.782028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.782053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.782192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.782217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.782417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.782442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 
00:33:48.598 [2024-07-23 06:29:41.782586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.782611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.782794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.782818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.782995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.783020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.783231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.783259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.783462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.783487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.783659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.783685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.598 [2024-07-23 06:29:41.783826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.598 [2024-07-23 06:29:41.783851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.598 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.784108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.784155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.784382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.784407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.784604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.784639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 
00:33:48.599 [2024-07-23 06:29:41.784830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.784855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.785026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.785051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.785270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.785298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.785488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.785516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.785730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.785755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.785954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.785983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.786201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.786226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.786426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.786452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.786620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.786649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.786814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.786841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 
00:33:48.599 [2024-07-23 06:29:41.787046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.787071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.787250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.787278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.787559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.787610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.787809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.787835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.788000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.788027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.788223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.788251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.788419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.788444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.788626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.788652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.788825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.788850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.789022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.789047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 
00:33:48.599 [2024-07-23 06:29:41.789271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.789299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.789479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.789504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.789673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.789702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.789884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.789928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.790112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.790137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.790288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.790313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.790529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.790557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.790744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.790769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.790971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.790995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.791183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.791211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 
00:33:48.599 [2024-07-23 06:29:41.791422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.791450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.791667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.791693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.791897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.791925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.792084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.792111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.792302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.599 [2024-07-23 06:29:41.792327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.599 qpair failed and we were unable to recover it. 00:33:48.599 [2024-07-23 06:29:41.792515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.792542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.792737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.792765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.792963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.792988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.793179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.793206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.793363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.793390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 
00:33:48.600 [2024-07-23 06:29:41.793588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.793619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.793796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.793821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.793966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.793991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.794163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.794188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.794411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.794439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.794642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.794671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.794856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.794881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.795069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.795096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.795265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.795293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.795482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.795511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 
00:33:48.600 [2024-07-23 06:29:41.795688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.795717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.795906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.795933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.796128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.796153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.796353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.796381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.796572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.796599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.796784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.796810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.797001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.797029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.797231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.797256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.797432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.797457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.797653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.797682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 
00:33:48.600 [2024-07-23 06:29:41.797878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.797906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.798098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.798123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.798333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.798361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.798567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.798595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.798843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.798868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.799043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.799071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.799231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.799259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.600 qpair failed and we were unable to recover it. 00:33:48.600 [2024-07-23 06:29:41.799431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.600 [2024-07-23 06:29:41.799456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.799633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.799658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.799849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.799876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 
00:33:48.601 [2024-07-23 06:29:41.800043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.800068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.800213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.800237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.800468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.800496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.800672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.800698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.800835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.800878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.801104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.801132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.801357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.801386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.801555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.801583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.801787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.801813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.801966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.801991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 
00:33:48.601 [2024-07-23 06:29:41.802131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.802155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.802336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.802360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.802561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.802587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.802801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.802829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.803037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.803062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.803270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.803295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.803497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.803525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.803715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.803744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.803914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.803939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.804088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.804113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 
00:33:48.601 [2024-07-23 06:29:41.804284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.804309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.804455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.804481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.804657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.804684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.804821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.804845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.805020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.805045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.805269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.805297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.805461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.805488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.805652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.805678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.805888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.805916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.806111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.806136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 
00:33:48.601 [2024-07-23 06:29:41.806305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.806330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.806498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.806523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.806724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.806753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.806946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.806971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.807140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.807168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.601 [2024-07-23 06:29:41.807388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.601 [2024-07-23 06:29:41.807416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.601 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.807580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.807605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.807770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.807795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.808017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.808045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.808217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.808243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 
00:33:48.602 [2024-07-23 06:29:41.808432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.808460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.808646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.808675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.808849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.808875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.809044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.809072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.809259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.809287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.809484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.809511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.809712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.809737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.809884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.809927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.810103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.810128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.810388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.810415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 
00:33:48.602 [2024-07-23 06:29:41.810599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.810640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.810847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.810872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.811090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.811118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.811382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.811410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.811608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.811641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.811811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.811836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.812017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.812045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.812219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.812244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.812435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.812463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.812635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.812663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 
00:33:48.602 [2024-07-23 06:29:41.812856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.812881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.813089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.813114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.813292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.813317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.813516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.813542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.813771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.813799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.813999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.814024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.814223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.814248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.814418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.814445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.814609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.814718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.814916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.814941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 
00:33:48.602 [2024-07-23 06:29:41.815150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.815177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.815382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.815407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.815580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.815606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.815788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.602 [2024-07-23 06:29:41.815813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.602 qpair failed and we were unable to recover it. 00:33:48.602 [2024-07-23 06:29:41.816007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.816043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.816242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.816267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.816459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.816487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.816671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.816700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.816892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.816917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.817114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.817142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 
00:33:48.603 [2024-07-23 06:29:41.817341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.817369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.817564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.817589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.817819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.817847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.818052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.818077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.818335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.818360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.818587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.818635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.818797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.818826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.819048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.819073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.819273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.819301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.819521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.819546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 
00:33:48.603 [2024-07-23 06:29:41.819698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.819724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.819950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.819978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.820177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.820203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.820354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.820379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.820566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.820596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.820821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.820850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.821052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.821077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.821247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.821274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.821464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.821493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.821693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.821719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 
00:33:48.603 [2024-07-23 06:29:41.821878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.821904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.822098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.822129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.822350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.822375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.822608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.822641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.822817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.822844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.823065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.823090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.823287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.823315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.823484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.823511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.823705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.823730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.823891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.823916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 
00:33:48.603 [2024-07-23 06:29:41.824137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.824164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.824378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.824402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.603 qpair failed and we were unable to recover it. 00:33:48.603 [2024-07-23 06:29:41.824618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.603 [2024-07-23 06:29:41.824647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.824835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.824862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.825078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.825103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.825276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.825301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.825495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.825523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.825706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.825732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.825929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.825957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.826140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.826168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 
00:33:48.604 [2024-07-23 06:29:41.826342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.826367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.826515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.826539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.826709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.826734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.826907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.826933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.827155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.827182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.827348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.827376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.827566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.827591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.827815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.827844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.828067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.828092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.828239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.828264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 
00:33:48.604 [2024-07-23 06:29:41.828462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.828486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.828666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.828695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.828888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.828913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.829103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.829132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.829318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.829346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.829539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.829564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.829765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.829794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.829989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.830017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.830176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.830202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 00:33:48.604 [2024-07-23 06:29:41.830371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.830396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it. 
00:33:48.604 [2024-07-23 06:29:41.830637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.604 [2024-07-23 06:29:41.830666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.604 qpair failed and we were unable to recover it.
00:33:48.897 [the same pair of errors repeats continuously from 06:29:41.830862 through 06:29:41.875522: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x17ea4b0 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it."]
00:33:48.897 [2024-07-23 06:29:41.875677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.875703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.875879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.875907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.876097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.876122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.876347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.876375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.876545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.876574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.876774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.876800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.877000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.877028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.877209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.877238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.877435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.877461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.877662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.877691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 
00:33:48.897 [2024-07-23 06:29:41.877847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.877875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.878072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.878097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.878274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.878299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.878442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.878467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.878640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.878681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.878883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.878912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.879077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.879106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.879276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.879301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.879482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.879511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.879703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.879733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 
00:33:48.897 [2024-07-23 06:29:41.879923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.879949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.880145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.880173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.880330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.880358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.880563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.880589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.880787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.880815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.880998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.881026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.881238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.881264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.881454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.881483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.881656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.881686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.881902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.881927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 
00:33:48.897 [2024-07-23 06:29:41.882123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.897 [2024-07-23 06:29:41.882152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.897 qpair failed and we were unable to recover it. 00:33:48.897 [2024-07-23 06:29:41.882342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.882371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.882584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.882610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.882799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.882829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.883017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.883046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.883221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.883247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.883395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.883425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.883602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.883635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.883844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.883869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.884069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.884097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 
00:33:48.898 [2024-07-23 06:29:41.884257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.884285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.884467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.884495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.884692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.884719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.884868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.884911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.885110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.885136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.885339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.885368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.885568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.885596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.885758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.885783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.885933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.885975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.886138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.886167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 
00:33:48.898 [2024-07-23 06:29:41.886369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.886395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.886571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.886596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.886806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.886831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.887002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.887027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.887196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.887222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.887420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.887445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.887595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.887629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.887796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.887824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.888005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.888032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.888247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.888272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 
00:33:48.898 [2024-07-23 06:29:41.888468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.888496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.888699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.888728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.888924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.888950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.889173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.889206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.889398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.889426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.889628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.889653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.889852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.889880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.890071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.890099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.898 qpair failed and we were unable to recover it. 00:33:48.898 [2024-07-23 06:29:41.890292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.898 [2024-07-23 06:29:41.890318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.890512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.890540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 
00:33:48.899 [2024-07-23 06:29:41.890756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.890782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.890929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.890956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.891125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.891150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.891345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.891374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.891533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.891559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.891734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.891760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.891910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.891935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.892116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.892141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.892342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.892371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.892538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.892568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 
00:33:48.899 [2024-07-23 06:29:41.892765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.892790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.893016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.893045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.893264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.893292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.893489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.893514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.893687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.893713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.893868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.893893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.894047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.894073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.894261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.894289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.894479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.894507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.894674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.894701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 
00:33:48.899 [2024-07-23 06:29:41.894854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.894902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.895091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.895119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.895313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.895338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.895516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.895544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.895738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.895767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.895964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.895989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.896179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.896207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.896396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.896425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.896621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.896647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.896818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.896846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 
00:33:48.899 [2024-07-23 06:29:41.897004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.897033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.897222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.897248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.897437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.897466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.897660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.897689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.897861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.897887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.898111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.898139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.899 [2024-07-23 06:29:41.898339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.899 [2024-07-23 06:29:41.898366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.899 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.898522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.898548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.898739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.898769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.898936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.898964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 
00:33:48.900 [2024-07-23 06:29:41.899185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.899210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.899409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.899437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.899653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.899682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.899861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.899887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.900104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.900132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.900301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.900329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.900528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.900553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.900767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.900796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.900990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.901018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.901206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.901232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 
00:33:48.900 [2024-07-23 06:29:41.901391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.901418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.901620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.901648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.901816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.901842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.902010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.902038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.902229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.902257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.902478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.902503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.902688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.902716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.902904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.902932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.903131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.903157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.903366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.903391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 
00:33:48.900 [2024-07-23 06:29:41.903584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.903620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.903817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.903843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.904033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.904061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.904250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.904278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.904502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.904528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.904725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.904754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.904979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.905004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.905203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.905228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.905430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.905458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 00:33:48.900 [2024-07-23 06:29:41.905649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.900 [2024-07-23 06:29:41.905678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.900 qpair failed and we were unable to recover it. 
00:33:48.900 [2024-07-23 06:29:41.905849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.905874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.906052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.906078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.906245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.906274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.906464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.906488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.906680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.906708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.906881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.906909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.907106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.907131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.907323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.907351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.907545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.907570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.907772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.907798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 
00:33:48.901 [2024-07-23 06:29:41.908024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.908052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.908264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.908288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.908485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.908510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.908736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.908765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.908967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.908995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.909186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.909211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.909406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.909434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.909658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.909684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.909839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.909869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.910039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.910064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 
00:33:48.901 [2024-07-23 06:29:41.910254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.910279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.910454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.910480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.910675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.910704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.910873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.910900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.911125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.911150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.911356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.911381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.911599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.911634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.911792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.911818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.911986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.912013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.912209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.912234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 
00:33:48.901 [2024-07-23 06:29:41.912441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.912466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.912642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.912670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.912877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.912903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.901 [2024-07-23 06:29:41.913073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.901 [2024-07-23 06:29:41.913098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.901 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.913248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.913273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.913453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.913481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.913715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.913741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.913916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.913944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.914138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.914166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.914368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.914393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 
00:33:48.902 [2024-07-23 06:29:41.914558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.914583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.914742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.914768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.914917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.914943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.915137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.915165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.915384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.915412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.915607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.915646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.915849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.915877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.916069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.916097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.916269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.916295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.916467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.916492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 
00:33:48.902 [2024-07-23 06:29:41.916691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.916717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.916913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.916938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.917112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.917141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.917339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.917365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.917563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.917588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.917772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.917797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.917989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.918017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.918217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.918242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.918411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.918436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.918633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.918663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 
00:33:48.902 [2024-07-23 06:29:41.918886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.918911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.919060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.919085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.919272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.919300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.919495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.919520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.919721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.919750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.919970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.919995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.920195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.920220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.920363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.920388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.920555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.920598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.920847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.920873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 
00:33:48.902 [2024-07-23 06:29:41.921074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.921105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.902 [2024-07-23 06:29:41.921295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.902 [2024-07-23 06:29:41.921323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.902 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.921529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.921555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.921710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.921736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.921925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.921953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.922153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.922178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.922376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.922405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.922594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.922629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.922823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.922848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.923016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.923044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 
00:33:48.903 [2024-07-23 06:29:41.923232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.923260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.923451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.923476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.923677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.923706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.923895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.923923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.924095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.924121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.924336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.924364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.924524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.924553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.924752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.924778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.924996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.925024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.925224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.925252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 
00:33:48.903 [2024-07-23 06:29:41.925453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.925479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.925702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.925731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.925917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.925945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.926144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.926169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.926365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.926390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.926566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.926595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.926818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.926845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.927021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.927047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.927225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.927251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.927425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.927450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 
00:33:48.903 [2024-07-23 06:29:41.927648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.927678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.927878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.927906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.928080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.928105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.928296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.928325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.928520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.928548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.928744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.928770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.928964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.928993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.929186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.929215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.929411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.929438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 00:33:48.903 [2024-07-23 06:29:41.929638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.903 [2024-07-23 06:29:41.929674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.903 qpair failed and we were unable to recover it. 
00:33:48.904 [2024-07-23 06:29:41.929875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.929900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.930036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.930062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.930279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.930307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.930501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.930534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.930757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.930783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.930979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.931007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.931227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.931254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.931453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.931478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.931678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.931707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.931894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.931923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 
00:33:48.904 [2024-07-23 06:29:41.932123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.932148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.932322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.932347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.932547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.932575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.932777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.932802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.933011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.933039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.933247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.933273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.933417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.933441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.933642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.933671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.933839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.933868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.934061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.934088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 
00:33:48.904 [2024-07-23 06:29:41.934284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.934312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.934500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.934529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.934719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.934745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.934902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.934931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.935155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.935180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.935378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.935403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.935600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.935636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.935832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.935860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.936024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.936050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.936229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.936254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 
00:33:48.904 [2024-07-23 06:29:41.936421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.936453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.936630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.936656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.936876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.936904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.937093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.937120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.937282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.937307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.937531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.937559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.937751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.904 [2024-07-23 06:29:41.937776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.904 qpair failed and we were unable to recover it. 00:33:48.904 [2024-07-23 06:29:41.937957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.937982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.938186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.938214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.938401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.938429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 
00:33:48.905 [2024-07-23 06:29:41.938634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.938659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.938881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.938909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.939085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.939110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.939279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.939304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.939500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.939528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.939716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.939746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.939943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.939969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.940185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.940213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.940435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.940463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.940655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.940681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 
00:33:48.905 [2024-07-23 06:29:41.940856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.940881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.941075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.941102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.941295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.941320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.941512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.941542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.941763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.941792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.941970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.941995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.942192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.942220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.942420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.942449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.942638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.942664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.942837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.942865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 
00:33:48.905 [2024-07-23 06:29:41.943031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.943058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.943246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.943271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.943460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.943487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.943682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.943711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.943907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.943933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.944107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.944131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.944349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.944377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.944552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.944577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.905 [2024-07-23 06:29:41.944751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.905 [2024-07-23 06:29:41.944780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.905 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.944961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.944989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 
00:33:48.906 [2024-07-23 06:29:41.945178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.945204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.945401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.945430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.945633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.945669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.945823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.945848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.946043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.946071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.946292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.946317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.946484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.946509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.946680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.946706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.946921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.946949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.947142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.947167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 
00:33:48.906 [2024-07-23 06:29:41.947392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.947420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.947642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.947670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.947873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.947899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.948066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.948091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.948292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.948317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.948537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.948563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.948770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.948799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.948968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.948996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.949162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.949187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.949354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.949382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 
00:33:48.906 [2024-07-23 06:29:41.949583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.949611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.949827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.949852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.950081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.950109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.950318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.950343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.950516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.950541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.950735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.950764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.950973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.950999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.951149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.951173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.951349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.951374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.951584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.951620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 
00:33:48.906 [2024-07-23 06:29:41.951843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.951868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.952056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.952083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.952278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.952304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.952500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.952524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.952735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.952761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.906 [2024-07-23 06:29:41.952985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.906 [2024-07-23 06:29:41.953013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.906 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.953210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.953235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.953435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.953463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.953659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.953688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.953858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.953883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 
00:33:48.907 [2024-07-23 06:29:41.954058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.954083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.954286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.954311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.954481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.954507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.954676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.954706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.954924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.954952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.955170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.955195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.955368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.955397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.955586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.955620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.955822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.955847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.956001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.956026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 
00:33:48.907 [2024-07-23 06:29:41.956199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.956224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.956396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.956421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.956611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.956646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.956851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.956876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.957051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.957075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.957270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.957302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.957516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.957544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.957743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.957770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.958002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.958030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.958221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.958249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 
00:33:48.907 [2024-07-23 06:29:41.958446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.958472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.958669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.958699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.958879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.958904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.959082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.959107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.959277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.959302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.959466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.959491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.959635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.959661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.959826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.959851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.959998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.960023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.960197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.960223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 
00:33:48.907 [2024-07-23 06:29:41.960395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.960424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.960621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.960646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.960818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.960843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.961013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.907 [2024-07-23 06:29:41.961038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.907 qpair failed and we were unable to recover it. 00:33:48.907 [2024-07-23 06:29:41.961231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.961258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.961458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.961483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.961649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.961685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.961889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.961917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.962120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.962146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.962343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.962368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 
00:33:48.908 [2024-07-23 06:29:41.962574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.962604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.962787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.962812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.963016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.963048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.963231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.963259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.963432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.963458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.963635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.963672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.963872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.963914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.964114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.964140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.964336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.964365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.964562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.964590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 
00:33:48.908 [2024-07-23 06:29:41.964805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.964831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.964981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.965007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.965180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.965205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.965355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.965381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.965558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.965584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.965814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.965839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.966036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.966062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.966248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.966277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.966447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.966475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.966674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.966700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 
00:33:48.908 [2024-07-23 06:29:41.966859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.966890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.967048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.967076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.967265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.967290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.967490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.967518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.967733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.967762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.967934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.967960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.968190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.968218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.968417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.968445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.968674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.968700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.968927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.968955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 
00:33:48.908 [2024-07-23 06:29:41.969114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.969142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.969331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.969357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.908 qpair failed and we were unable to recover it. 00:33:48.908 [2024-07-23 06:29:41.969530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.908 [2024-07-23 06:29:41.969556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.969754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.969783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.969951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.969976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.970155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.970180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.970398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.970426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.970605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.970640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.970815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.970840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.970985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.971010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 
00:33:48.909 [2024-07-23 06:29:41.971178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.971203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.971412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.971440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.971630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.971659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.971892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.971917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.972095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.972124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.972341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.972370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.972538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.972563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.972742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.972771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.972965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.972993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.973188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.973213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 
00:33:48.909 [2024-07-23 06:29:41.973407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.973435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.973599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.973636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.973828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.973853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.974050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.974079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.974264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.974292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.974467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.974492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.974680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.974708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.974938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.974963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.975140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.975166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.975369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.975397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 
00:33:48.909 [2024-07-23 06:29:41.975578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.975606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.975783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.975809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.976005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.976034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.976201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.976229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.976385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.976411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.976608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.976643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.976809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.976837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.977014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.977040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.977232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.977260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 00:33:48.909 [2024-07-23 06:29:41.977424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.909 [2024-07-23 06:29:41.977453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.909 qpair failed and we were unable to recover it. 
00:33:48.909 [2024-07-23 06:29:41.977678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.909 [2024-07-23 06:29:41.977708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420
00:33:48.909 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry in this burst of connection attempts, with log timestamps advancing from 06:29:41.977 through 06:29:42.023 ...]
00:33:48.915 [2024-07-23 06:29:42.023711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.023737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.023933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.023962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.024125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.024154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.024348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.024373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.024540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.024568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.024778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.024805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.024959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.024984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.025158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.025187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.915 qpair failed and we were unable to recover it. 00:33:48.915 [2024-07-23 06:29:42.025361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.915 [2024-07-23 06:29:42.025387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.025582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.025607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 
00:33:48.916 [2024-07-23 06:29:42.025809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.025837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.026035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.026063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.026255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.026280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.026488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.026516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.026682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.026711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.026929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.026954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.027125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.027153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.027311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.027339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.027564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.027589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.027815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.027844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 
00:33:48.916 [2024-07-23 06:29:42.028004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.028031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.028237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.028262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.028414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.028439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.028642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.028668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.028869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.028896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.029097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.029123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.029273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.029298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.029475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.029501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.029698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.029723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.029955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.029983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 
00:33:48.916 [2024-07-23 06:29:42.030154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.030181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.030403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.030431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.030597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.030631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.030804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.030830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.031007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.031036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.031208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.031234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.031400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.031425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.031627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.031661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.031893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.031921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.032116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.032142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 
00:33:48.916 [2024-07-23 06:29:42.032311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.032340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.032544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.032570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.032793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.032819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.033040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.033069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.033224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.033250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.033480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.033504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.916 [2024-07-23 06:29:42.033681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.916 [2024-07-23 06:29:42.033706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.916 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.033878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.033906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.034131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.034156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.034354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.034382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 
00:33:48.917 [2024-07-23 06:29:42.034569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.034595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.034755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.034780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.035007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.035035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.035240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.035265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.035440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.035466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.035678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.035706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.035903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.035928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.036098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.036123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.036344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.036372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.036539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.036567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 
00:33:48.917 [2024-07-23 06:29:42.036791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.036816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.036972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.037007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.037174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.037203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.037400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.037425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.037575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.037601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.037786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.037812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.037958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.037984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.038158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.038183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.038351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.038376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.038551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.038576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 
00:33:48.917 [2024-07-23 06:29:42.038754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.038784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.039019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.039047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.039274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.039299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.039487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.039516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.039744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.039773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.039993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.040019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.040252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.040280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.040447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.040475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.040661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.040687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.040858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.040886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 
00:33:48.917 [2024-07-23 06:29:42.041057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.041085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.041249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.041274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.041449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.041475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.041678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.041706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.041914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.917 [2024-07-23 06:29:42.041939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.917 qpair failed and we were unable to recover it. 00:33:48.917 [2024-07-23 06:29:42.042131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.042159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.042327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.042355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.042546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.042574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.042774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.042802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.042973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.043001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 
00:33:48.918 [2024-07-23 06:29:42.043227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.043253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.043443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.043470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.043668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.043697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.043888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.043913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.044100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.044127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.044306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.044331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.044502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.044527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.044722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.044751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.044939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.044968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.045132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.045158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 
00:33:48.918 [2024-07-23 06:29:42.045379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.045407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.045573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.045600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.045796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.045827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.045992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.046020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.046218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.046243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.046420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.046446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.046642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.046674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.046893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.046921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.047093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.047119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.047288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.047313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 
00:33:48.918 [2024-07-23 06:29:42.047509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.047537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.047736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.047762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.047955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.047984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.048176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.048204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.048405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.048430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.048580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.048607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.048793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.048821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.049046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.049071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.049251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.049279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.049464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.049492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 
00:33:48.918 [2024-07-23 06:29:42.049668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.049694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.049838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.918 [2024-07-23 06:29:42.049863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.918 qpair failed and we were unable to recover it. 00:33:48.918 [2024-07-23 06:29:42.050051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.050079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.050249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.050275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.050471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.050501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.050700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.050726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.050903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.050928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.051124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.051152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.051343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.051371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.051591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.051636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 
00:33:48.919 [2024-07-23 06:29:42.051812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.051840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.052032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.052060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.052228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.052252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.052472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.052500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.052714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.052751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.052945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.052970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.053190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.053218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.053439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.053464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.053639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.053670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.053854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.053882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 
00:33:48.919 [2024-07-23 06:29:42.054074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.054103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.054297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.054322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.054512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.054541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.054713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.054741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.054913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.054938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.055125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.055153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.055371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.055400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.055625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.055651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.055812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.055838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.056062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.056090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 
00:33:48.919 [2024-07-23 06:29:42.056254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.056280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.056476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.056504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.056710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.056736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.056890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.056915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.919 qpair failed and we were unable to recover it. 00:33:48.919 [2024-07-23 06:29:42.057087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.919 [2024-07-23 06:29:42.057112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.057304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.057332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.057532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.057562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.057772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.057801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.058018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.058046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.058212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.058238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 
00:33:48.920 [2024-07-23 06:29:42.058409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.058434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.058582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.058607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.058798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.058824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.059019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.059047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.059216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.059245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.059438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.059463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.059640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.059669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.059856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.059885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.060077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.060103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.060298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.060326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 
00:33:48.920 [2024-07-23 06:29:42.060523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.060551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.060742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.060768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.060908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.060934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.061105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.061130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.061326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.061351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.061545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.061573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.061791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.061816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.061955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.061980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.062175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.062203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.062419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.062447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 
00:33:48.920 [2024-07-23 06:29:42.062623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.062649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.062806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.062831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.063021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.063051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.063269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.063294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.063498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.063526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.063717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.063751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.063976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.064001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.064154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.064179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.064369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.064397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.064625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.064650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 
00:33:48.920 [2024-07-23 06:29:42.064856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.064884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.065108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.065133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.920 qpair failed and we were unable to recover it. 00:33:48.920 [2024-07-23 06:29:42.065310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.920 [2024-07-23 06:29:42.065336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.065534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.065559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.065731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.065756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.065895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.065920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.066083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.066110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.066315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.066343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.066532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.066557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.066714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.066742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 
00:33:48.921 [2024-07-23 06:29:42.066939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.066964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.067111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.067137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.067329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.067358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.067555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.067583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.067792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.067818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.068014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.068042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.068260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.068287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.068516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.068541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.068762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.068792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.068951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.068979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 
00:33:48.921 [2024-07-23 06:29:42.069170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.069195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.069421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.069449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.069671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.069699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.069925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.069950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.070126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.070153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.070342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.070369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.070553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.070578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.070736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.070762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.070982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.071010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.071208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.071233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 
00:33:48.921 [2024-07-23 06:29:42.071423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.071451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.071666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.071695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.071915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.071940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.072131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.072159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.072384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.072416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.072611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.072641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.072856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.072881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.073031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.073056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.073228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.073253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.921 [2024-07-23 06:29:42.073447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.073475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 
00:33:48.921 [2024-07-23 06:29:42.073667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.921 [2024-07-23 06:29:42.073696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.921 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.073870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.073894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.074059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.074087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.074293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.074318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.074490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.074515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.074694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.074719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.074885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.074909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.075058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.075084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.075280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.075308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.075499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.075527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 
00:33:48.922 [2024-07-23 06:29:42.075728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.075754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.075921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.075949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.076133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.076161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.076350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.076375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.076564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.076592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.076765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.076791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.076957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.076982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.077204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.077229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.077417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.077445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.077635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.077661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 
00:33:48.922 [2024-07-23 06:29:42.077831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.077860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.078041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.078070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.078237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.078262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.078439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.078464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.078669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.078697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.078858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.078883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.079055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.079083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.079237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.079263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.079436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.079460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.079626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.079652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 
00:33:48.922 [2024-07-23 06:29:42.079821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.079849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.080048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.080073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.080263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.080291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.080443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.080471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.080640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.080666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.080817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.080859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.081058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.081086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.081282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.081308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.081474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.922 [2024-07-23 06:29:42.081499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.922 qpair failed and we were unable to recover it. 00:33:48.922 [2024-07-23 06:29:42.081647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.081673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 
00:33:48.923 [2024-07-23 06:29:42.081854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.081880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.082056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.082082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.082266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.082294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.082493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.082518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.082672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.082698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.082896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.082921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.083145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.083170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.083393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.083418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.083570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.083600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.083830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.083856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 
00:33:48.923 [2024-07-23 06:29:42.084068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.084093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.084286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.084314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.084483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.084509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.084709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.084735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.084943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.084971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.085160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.085187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.085365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.085392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.085583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.085609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.085786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.085811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.086010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.086038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 
00:33:48.923 [2024-07-23 06:29:42.086231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.086258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.086444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.086469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.086662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.086700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.086929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.086954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.087156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.087181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.087382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.087410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.087601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.087637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.087833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.087859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.088055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.088083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.088275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.088303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 
00:33:48.923 [2024-07-23 06:29:42.088476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.088501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.088654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.088697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.088920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.088948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.089142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.089167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.089365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.089393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.089578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.089606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.089816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.923 [2024-07-23 06:29:42.089841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.923 qpair failed and we were unable to recover it. 00:33:48.923 [2024-07-23 06:29:42.090056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.090081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.090229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.090254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.090426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.090452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 
00:33:48.924 [2024-07-23 06:29:42.090625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.090654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.090882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.090907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.091079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.091104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.091304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.091332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.091524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.091553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.091746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.091772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.091946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.091974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.092188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.092216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.092409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.092435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.092628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.092661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 
00:33:48.924 [2024-07-23 06:29:42.092855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.092883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.093084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.093110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.093308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.093338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.093505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.093532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.093709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.093735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.093912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.093937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.094122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.094150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.094373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.094398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.094627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.094655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.094879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.094907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 
00:33:48.924 [2024-07-23 06:29:42.095076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.095101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.095297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.095324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.095521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.095547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.095704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.095730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.095903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.095928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.096103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.096132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.096353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.096378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.096540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.096568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.096767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.096793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 00:33:48.924 [2024-07-23 06:29:42.096944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.924 [2024-07-23 06:29:42.096969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.924 qpair failed and we were unable to recover it. 
00:33:48.925 [2024-07-23 06:29:42.097195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.097223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.097451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.097479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.097701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.097727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.097927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.097955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.098145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.098173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.098363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.098388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.098588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.098623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.098815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.098844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.099040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.099065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.099267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.099295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 
00:33:48.925 [2024-07-23 06:29:42.099490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.099518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.099692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.099718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.099940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.099969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.100162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.100190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.100410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.100436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.100639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.100668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.100851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.100879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.101054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.101079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.101258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.101283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.101470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.101497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 
00:33:48.925 [2024-07-23 06:29:42.101670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.101696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.101869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.101894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.102089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.102117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.102279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.102304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.102497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.102525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.102688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.102717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.102915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.102940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.103191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.103242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.103448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.103473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.103642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.103668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 
00:33:48.925 [2024-07-23 06:29:42.103889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.103917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.104104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.104132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.104312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.104337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.104500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.104532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.104736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.104762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.104939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.104964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.105154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.105181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.925 qpair failed and we were unable to recover it. 00:33:48.925 [2024-07-23 06:29:42.105360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.925 [2024-07-23 06:29:42.105388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.105624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.105650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.105854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.105882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 
00:33:48.926 [2024-07-23 06:29:42.106039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.106067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.106240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.106265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.106457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.106486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.106702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.106731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.106958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.106983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.107285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.107340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.107499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.107528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.107733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.107759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.107933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.107959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.108140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.108168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 
00:33:48.926 [2024-07-23 06:29:42.108369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.108395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.108572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.108597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.108806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.108834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.109067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.109093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.109259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.109287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.109499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.109527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.109704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.109729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.109884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.109910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.110090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.110115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.110316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.110341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 
00:33:48.926 [2024-07-23 06:29:42.110531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.110559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.110739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.110765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.110915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.110940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.111114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.111139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.111338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.111366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.111559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.111584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.111789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.111818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.111986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.112013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.112178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.112203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.112390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.112418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 
00:33:48.926 [2024-07-23 06:29:42.112632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.112661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.112890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.112915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.113085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.113115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.113306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.113335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.113558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.113583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.926 [2024-07-23 06:29:42.113789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.926 [2024-07-23 06:29:42.113818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.926 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.114010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.114039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.114237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.114263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.114484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.114513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.114705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.114734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 
00:33:48.927 [2024-07-23 06:29:42.114927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.114953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.115173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.115201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.115369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.115397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.115581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.115606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.115779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.115807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.115975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.116002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.116202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.116228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.116396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.116421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.116605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.116651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.116820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.116846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 
00:33:48.927 [2024-07-23 06:29:42.117016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.117041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.117227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.117255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.117454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.117479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.117671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.117700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.117893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.117921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.118119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.118144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.118341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.118369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.118535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.118563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.118752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.118778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.118968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.118996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 
00:33:48.927 [2024-07-23 06:29:42.119151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.119179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.119356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.119385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.119559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.119584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.119786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.119811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.119954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.119979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.120174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.120202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.120392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.120421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.120609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.120641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.120817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.120843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.121032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.121061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 
00:33:48.927 [2024-07-23 06:29:42.121249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.121275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.121490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.121518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.121685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.121713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.121933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.927 [2024-07-23 06:29:42.121958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.927 qpair failed and we were unable to recover it. 00:33:48.927 [2024-07-23 06:29:42.122156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.122184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.122380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.122409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.122593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.122624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.122777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.122802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.122976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.123001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.123170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.123195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 
00:33:48.928 [2024-07-23 06:29:42.123364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.123392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.123547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.123575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.123769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.123795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.123987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.124015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.124238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.124266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.124462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.124488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.124674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.124703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.124895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.124923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.125115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.125146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 00:33:48.928 [2024-07-23 06:29:42.125307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.125335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it. 
00:33:48.928 [2024-07-23 06:29:42.125530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.928 [2024-07-23 06:29:42.125557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.928 qpair failed and we were unable to recover it.
00:33:48.928-00:33:48.934 [... the same three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously with only the timestamps changing, from 2024-07-23 06:29:42.125530 through 06:29:42.170959 ...]
00:33:48.934 [2024-07-23 06:29:42.171181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.171209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.171419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.171444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.171645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.171673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.171842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.171871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.172067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.172092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.172284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.172317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.172503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.172531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.172731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.172757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.172977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.173005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.173230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.173255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 
00:33:48.934 [2024-07-23 06:29:42.173431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.173456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.173681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.173709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.173897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.173925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.174102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.174127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.174272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.174297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.174466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.174491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.174632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.174658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.174824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.174852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.175022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.175049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.175238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.175263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 
00:33:48.934 [2024-07-23 06:29:42.175411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.175437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.175587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.175618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.175798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.175823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.175993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.176021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.176238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.176266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.176443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.176468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.934 qpair failed and we were unable to recover it. 00:33:48.934 [2024-07-23 06:29:42.176668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.934 [2024-07-23 06:29:42.176694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.176895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.176923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.177139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.177164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.177354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.177382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 
00:33:48.935 [2024-07-23 06:29:42.177574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.177602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.177769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.177794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.177949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.177989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.178185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.178213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.178413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.178438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.178660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.178689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.178869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.178894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.179064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.179089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.179308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.179335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.179527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.179555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 
00:33:48.935 [2024-07-23 06:29:42.179740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.179766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.179959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.179987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.180180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.180208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.180364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.180390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.180578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.180606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.180806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.180835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.181030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.181056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.181261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.181289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.181488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.181514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.181652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.181678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 
00:33:48.935 [2024-07-23 06:29:42.181828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.181853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.182003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.182029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.182224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.182250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.182452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.182480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.182688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.182714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.182886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.182911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.183080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.183105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.183304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.183332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.183500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.183525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.183744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.183773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 
00:33:48.935 [2024-07-23 06:29:42.183972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.184001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.184197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.184222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.184419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.184444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.184641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.184670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.935 qpair failed and we were unable to recover it. 00:33:48.935 [2024-07-23 06:29:42.184844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.935 [2024-07-23 06:29:42.184870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.185077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.185128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.185318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.185346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.185511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.185536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.185732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.185761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.185943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.185969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 
00:33:48.936 [2024-07-23 06:29:42.186165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.186191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.186376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.186401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.186621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.186649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.186871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.186900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.187103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.187131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.187323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.187351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.187542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.187567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.187739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.187765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.187914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.187939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.188117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.188142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 
00:33:48.936 [2024-07-23 06:29:42.188337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.188363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.188576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.188601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.188783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.188808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.188983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.189009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.189191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.189219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.189413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.189438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.189629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.189658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.189854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.189881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.190110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.190135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.190358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.190386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 
00:33:48.936 [2024-07-23 06:29:42.190599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.190632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.190841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.190872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.191096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.191125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.191362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.191390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.191585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.191610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.191787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.191815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.192003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.192030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.192226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.192251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.192447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.192475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.192696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.192725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 
00:33:48.936 [2024-07-23 06:29:42.192898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.192928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.193108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.193133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.193310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.193336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.936 [2024-07-23 06:29:42.193486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.936 [2024-07-23 06:29:42.193511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.936 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.193711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.193737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.193941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.193969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.194159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.194185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.194360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.194389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.194580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.194608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.194839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.194865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 
00:33:48.937 [2024-07-23 06:29:42.195041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.195070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.195267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.195292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.195468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.195493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.195689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.195717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.195901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.195927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.196102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.196127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.196325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.196350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.196575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.196603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.196783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.196808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.196960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.196986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 
00:33:48.937 [2024-07-23 06:29:42.197176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.197205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.197382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.197407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.197607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.197637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.197804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.197830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.197974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.197998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.198175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.198200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.198392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.198420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.198639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.198669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.198875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.198903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.199089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.199117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 
00:33:48.937 [2024-07-23 06:29:42.199311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.199336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.199496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.199523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.199705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.199731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.199875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.199900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.200051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.200077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.200293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.200321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.200492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.200519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.200685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.200714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.200906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.200934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.201108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.201133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 
00:33:48.937 [2024-07-23 06:29:42.201329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.201357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.201554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.201582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.201788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.201813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.202007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.202035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.202240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.937 [2024-07-23 06:29:42.202267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.937 qpair failed and we were unable to recover it. 00:33:48.937 [2024-07-23 06:29:42.202411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.202437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.202589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.202637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.202801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.202830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.203000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.203025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.203226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.203252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 
00:33:48.938 [2024-07-23 06:29:42.203431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.203460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.203686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.203713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.203913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.203942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.204125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.204153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.204343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.204369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.204534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.204563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.204763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.204789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.204927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.204952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.205178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.205206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.205410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.205435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 
00:33:48.938 [2024-07-23 06:29:42.205654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.205698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.205839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.205865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.206041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.206067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.206218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.206243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.206417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.206459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.206651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.206678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.206828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.206853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.207076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.207104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.207286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.207312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.207477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.207505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 
00:33:48.938 [2024-07-23 06:29:42.207695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.207724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.207892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.207918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.208118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.208143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.938 [2024-07-23 06:29:42.208318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.938 [2024-07-23 06:29:42.208346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.938 qpair failed and we were unable to recover it. 00:33:48.939 [2024-07-23 06:29:42.208524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.939 [2024-07-23 06:29:42.208550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.939 qpair failed and we were unable to recover it. 00:33:48.939 [2024-07-23 06:29:42.208749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.939 [2024-07-23 06:29:42.208778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.939 qpair failed and we were unable to recover it. 00:33:48.939 [2024-07-23 06:29:42.208969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.939 [2024-07-23 06:29:42.208997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.939 qpair failed and we were unable to recover it. 00:33:48.939 [2024-07-23 06:29:42.209211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.939 [2024-07-23 06:29:42.209238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.939 qpair failed and we were unable to recover it. 00:33:48.939 [2024-07-23 06:29:42.209454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.939 [2024-07-23 06:29:42.209483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.939 qpair failed and we were unable to recover it. 00:33:48.939 [2024-07-23 06:29:42.209643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.939 [2024-07-23 06:29:42.209675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.939 qpair failed and we were unable to recover it. 
00:33:48.939 [2024-07-23 06:29:42.209872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.939 [2024-07-23 06:29:42.209898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:48.939 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.210102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.210132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.210360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.210389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.210566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.210592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.210789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.210817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.211035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.211063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.211225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.211250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.211471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.211499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.211693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.211722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.211891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.211917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 
00:33:49.219 [2024-07-23 06:29:42.212083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.212110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.212302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.212330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.212529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.212555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.212725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.212754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.212956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.212982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.213122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.213155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.213346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.213373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.213545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 06:29:42.213573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.219 qpair failed and we were unable to recover it. 00:33:49.219 [2024-07-23 06:29:42.213776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.213802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.213994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.214022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 
00:33:49.220 [2024-07-23 06:29:42.214196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.214224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.214397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.214422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.214625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.214651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.214801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.214826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.215028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.215053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.215248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.215276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.215470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.215497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.216329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.216361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.216588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.216634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.216841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.216870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 
00:33:49.220 [2024-07-23 06:29:42.217056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.217081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.217246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.217274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.217463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.217492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.217682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.217709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.217881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.217910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.218073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.218101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.218270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.218296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.218457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.218485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.218666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.218692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.218869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.218894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 
00:33:49.220 [2024-07-23 06:29:42.219065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.219090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.219323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.219348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.219494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.219525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.219720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.219749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.219936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.219964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.220158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.220185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.220379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.220407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.220598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.220646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.220823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.220848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.221051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.221079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 
00:33:49.220 [2024-07-23 06:29:42.221265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.221293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.221462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.221488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.221637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.221679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.221825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.221850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.222009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.222034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.220 [2024-07-23 06:29:42.222214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 06:29:42.222239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.220 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.222399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.222442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.222676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.222702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.222847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.222872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.223056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.223084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 
00:33:49.221 [2024-07-23 06:29:42.223277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.223302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.223443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.223468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.223611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.223643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.223832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.223858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.224032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.224058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.224270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.224295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.224443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.224468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.224661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.224689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.224882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.224910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.225115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.225140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 
00:33:49.221 [2024-07-23 06:29:42.225361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.225389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.225575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.225603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.225819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.225844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.226044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.226072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.226238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.226266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.226492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.226517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.226732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.226758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.226933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.226958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.227106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.227131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.227303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.227329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 
00:33:49.221 [2024-07-23 06:29:42.227502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.227530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.227700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.227727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.227921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.227949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.228147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.228172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.228347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.228372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.228568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.228597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.228842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.228871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.229056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.229082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.229248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.229276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.229462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.229490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 
00:33:49.221 [2024-07-23 06:29:42.229675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.229701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.229881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.229909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.230099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.230124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.230287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.221 [2024-07-23 06:29:42.230312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.221 qpair failed and we were unable to recover it. 00:33:49.221 [2024-07-23 06:29:42.230488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.230516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.230683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.230711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.230881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.230907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.231078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.231106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.231271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.231300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.231494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.231520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 
00:33:49.222 [2024-07-23 06:29:42.231673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.231699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.231847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.231888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.232082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.232107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.232273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.232301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.232461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.232490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.232691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.232717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.232885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.232913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.233098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.233127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.233288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.233313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.233507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.233536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 
00:33:49.222 [2024-07-23 06:29:42.233748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.233778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.233957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.233982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.234177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.234205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.234396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.234421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.234598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.234631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.234775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.234800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.234973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.234998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.235174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.235199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.235375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.235400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.235628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.235657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 
00:33:49.222 [2024-07-23 06:29:42.235834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.235860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.236007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.236035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.236204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.236230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.236409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.236440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.236596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.236636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.236803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.236831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.237012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.237038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.237217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.237242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.237416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.237441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.237608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.237641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 
00:33:49.222 [2024-07-23 06:29:42.237816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.237841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.238013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.222 [2024-07-23 06:29:42.238041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.222 qpair failed and we were unable to recover it. 00:33:49.222 [2024-07-23 06:29:42.238241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.238266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.238424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.238453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.238606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.238642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.238810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.238836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.239033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.239061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.239259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.239291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.239467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.239492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.239718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.239747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 
00:33:49.223 [2024-07-23 06:29:42.239915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.239942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.240164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.240189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.240423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.240451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.240608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.240643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.240824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.240849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.241070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.241098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.241265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.241295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.241511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.241536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.241741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.241766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.241919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.241944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 
00:33:49.223 [2024-07-23 06:29:42.242151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.242176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.242376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.242404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.242575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.242603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.242813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.242839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.243008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.243037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.243252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.243280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.243483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.243508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.243718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.243747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.243934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.243962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.244130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.244155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 
00:33:49.223 [2024-07-23 06:29:42.244372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.244400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.244598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.244646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.244821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.244847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.245015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.245043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.245238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.245269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.245462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.245488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.245633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.245666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.245861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.245889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.246088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.246113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 00:33:49.223 [2024-07-23 06:29:42.246267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.223 [2024-07-23 06:29:42.246292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.223 qpair failed and we were unable to recover it. 
00:33:49.223 [2024-07-23 06:29:42.246468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.246493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.246647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.246673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.246820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.246864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.247015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.247044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.247242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.247267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.247469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.247497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.247663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.247692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.247856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.247882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.248076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.248105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.248327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.248353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 
00:33:49.224 [2024-07-23 06:29:42.248505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.248530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.248668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.248694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.248841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.248882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.249064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.249089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.249310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.249339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.249531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.249559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.249756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.249785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.249979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.250007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.250169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.250197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.250381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.250407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 
00:33:49.224 [2024-07-23 06:29:42.250582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.250608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.250765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.250809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.250981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.251005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.251223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.251252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.251445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.251474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.251680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.251707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.251864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.251890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.252029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.252070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.252256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.252281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 00:33:49.224 [2024-07-23 06:29:42.252480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.224 [2024-07-23 06:29:42.252515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.224 qpair failed and we were unable to recover it. 
00:33:49.224 [2024-07-23 06:29:42.252712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.252741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.252962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.252987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.253167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.253194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.253349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.253377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.253597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.253631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.253860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.253905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.254087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.254114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.254312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.254341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.254507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.254536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.254724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.254751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 
00:33:49.225 [2024-07-23 06:29:42.254930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.254956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.255120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.255147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.255375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.255401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.255571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.255597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.255801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.255843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.256042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.256072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.256237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.256263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.256436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.256487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.256691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.256721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.256907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.256933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 
00:33:49.225 [2024-07-23 06:29:42.257110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.257136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.257309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.257334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.257501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.257525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.257686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.257716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.257882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.257909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.258099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.258124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.258269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.258296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.258517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.258545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.258720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.258745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.258912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.258940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 
00:33:49.225 [2024-07-23 06:29:42.259129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.259157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.259318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.259343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.259567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.259595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.259800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.259826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.259981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.260006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.260177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.260203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.260420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.260448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.260643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.260670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.225 qpair failed and we were unable to recover it. 00:33:49.225 [2024-07-23 06:29:42.260852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.225 [2024-07-23 06:29:42.260877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.261073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.261102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 
00:33:49.226 [2024-07-23 06:29:42.261297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.261322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.261517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.261544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.261703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.261731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.261912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.261938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.262133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.262160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.262348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.262376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.262554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.262581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.262798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.262826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.262984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.263012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.263208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.263233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 
00:33:49.226 [2024-07-23 06:29:42.263422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.263450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.263642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.263670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.263862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.263887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.264060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.264089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.264253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.264281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.264469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.264494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.264665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.264694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.264855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.264885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.265124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.265150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.265314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.265347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 
00:33:49.226 [2024-07-23 06:29:42.265548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.265573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.265757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.265783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.265947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.265977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.266185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.266210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.266388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.266414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.266579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.266607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.266795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.266821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.266976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.267001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.267147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.267173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.267343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.267370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 
00:33:49.226 [2024-07-23 06:29:42.267567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.267593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.267768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.267793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.267947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.267973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.268112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.268137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.268289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.268330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.268550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.268578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.226 [2024-07-23 06:29:42.268770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.226 [2024-07-23 06:29:42.268797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.226 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.268962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.268989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.269189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.269215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.269393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.269419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 
00:33:49.227 [2024-07-23 06:29:42.269569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.269594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.269778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.269804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.269978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.270004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.270213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.270242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.270412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.270442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.270622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.270648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.270804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.270833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.271006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.271032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.271212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.271237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.271381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.271406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 
00:33:49.227 [2024-07-23 06:29:42.271594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.271631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.271797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.271823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.271998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.272023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.272197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.272223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.272397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.272423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.272612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.272649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.272844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.272869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.273046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.273071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.273271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.273299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.273458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.273487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 
00:33:49.227 [2024-07-23 06:29:42.273660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.273686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.273824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.273851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.274048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.274073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.274239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.274265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.274419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.274458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.274658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.274684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.274841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.274867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.275046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.275071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.275213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.275238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.275405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.275430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 
00:33:49.227 [2024-07-23 06:29:42.275594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.275625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.275784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.275809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.275950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.275975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.276126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.276154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.227 [2024-07-23 06:29:42.276302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.227 [2024-07-23 06:29:42.276327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.227 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.276473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.276498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.276649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.276674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.276821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.276845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.277028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.277054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.277201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.277226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 
00:33:49.228 [2024-07-23 06:29:42.277375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.277400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.277572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.277598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.277746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.277772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.277919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.277945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.278120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.278145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.278311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.278339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.278543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.278569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.278734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.278760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.278903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.278944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.279139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.279165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 
00:33:49.228 [2024-07-23 06:29:42.279307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.279332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.279501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.279526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.279734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.279760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.279920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.279945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.280141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.280171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.280335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.280361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.280514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.280539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.280698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.280724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.280905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.280932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.281078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.281104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 
00:33:49.228 [2024-07-23 06:29:42.281286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.281325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.281531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.281563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.281749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.281776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.281927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.281954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.282091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.282125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.282297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.282334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.282524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.228 [2024-07-23 06:29:42.282552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.228 qpair failed and we were unable to recover it. 00:33:49.228 [2024-07-23 06:29:42.282734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.282761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.282908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.282934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.283076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.283101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 
00:33:49.229 [2024-07-23 06:29:42.283295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.283324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.283486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.283512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.283659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.283685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.283831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.283856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.284025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.284055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.284246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.284274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.284444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.284470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.284633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.284660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.284839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.284864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.285025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.285053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 
00:33:49.229 [2024-07-23 06:29:42.285272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.285297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.285466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.285491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.285651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.285677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.285851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.285877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.286027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.286052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.286204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.286230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.286430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.286464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.286632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.286662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.286820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.286852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.287037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.287070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 
00:33:49.229 [2024-07-23 06:29:42.287300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.287330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.287557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.287592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.287754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.287787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.287985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.288022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.288243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.288271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.288470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.288495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.288706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.288733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.288884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.288927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.289118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.289143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.289341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.289370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 
00:33:49.229 [2024-07-23 06:29:42.289595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.289632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.290449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.290485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.290717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.290744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.290937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.290967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.291161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.229 [2024-07-23 06:29:42.291186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.229 qpair failed and we were unable to recover it. 00:33:49.229 [2024-07-23 06:29:42.291341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.291366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.291564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.291590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.291790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.291815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.292014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.292042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.292216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.292242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 
00:33:49.230 [2024-07-23 06:29:42.292393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.292419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.292594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.292640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.292819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.292845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.293018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.293044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.293225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.293254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.293451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.293478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.293658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.293684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.293840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.293865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.294071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.294099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.294314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.294339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 
00:33:49.230 [2024-07-23 06:29:42.294500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.294528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.294729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.294756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.294910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.294936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.295129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.295159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.295311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.295339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.295538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.295564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.295745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.295770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.295988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.296016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.296204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.296233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.296453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.296481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 
00:33:49.230 [2024-07-23 06:29:42.296682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.296708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.296854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.296879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.297052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.297080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.297267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.297295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.297495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.297520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.297706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.297734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.297941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.297966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.298159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.298184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.298369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.298398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.298627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.298667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 
00:33:49.230 [2024-07-23 06:29:42.298823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.298848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.299039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.299068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.299275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.299303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.230 [2024-07-23 06:29:42.299527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.230 [2024-07-23 06:29:42.299560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.230 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.299782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.299811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.299990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.300027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.300225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.300261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.300481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.300512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.300688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.300718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.300892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.300917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 
00:33:49.231 [2024-07-23 06:29:42.301095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.301122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.301306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.301343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.301584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.301635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.301796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.301821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.302040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.302069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.302267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.302304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.302571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.302598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.302793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.302819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.303015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.303041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.303234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.303277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 
00:33:49.231 [2024-07-23 06:29:42.303478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.303515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.303693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.303719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.303890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.303920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.304101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.304135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.304308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.304345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.304577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.304603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.304788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.304814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.304989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.305014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.305238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.305267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.305498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.305527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 
00:33:49.231 [2024-07-23 06:29:42.305713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.305739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.305909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.305937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.306106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.306133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.306349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.306375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.306569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.306598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.306776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.306804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.306977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.307003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.307161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.307189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.307356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.307384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.307582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.307608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 
00:33:49.231 [2024-07-23 06:29:42.307774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.307800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.231 [2024-07-23 06:29:42.308015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.231 [2024-07-23 06:29:42.308045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.231 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.308220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.308245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.308399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.308440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.308654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.308685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.308831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.308856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.309049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.309085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.309289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.309325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.309523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.309549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.309717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.309743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 
00:33:49.232 [2024-07-23 06:29:42.309896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.309924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.310139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.310176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.310387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.310425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.310603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.310642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.310826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.310852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.311063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.311105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.311318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.311353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.311531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.311558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.311709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.311736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.311895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.311945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 
00:33:49.232 [2024-07-23 06:29:42.312196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.312232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.312430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.312458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.312665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.312692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.312840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.312886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.313127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.313162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.313345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.313373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.313547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.313573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.313758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.313788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.313977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.314014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.314232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.314270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 
00:33:49.232 [2024-07-23 06:29:42.314486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.314515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.314677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.314706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.314884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.314910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.315088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.315130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.315320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.315346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.315549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.315574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.315742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.315767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.315909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.315951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.316147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.316172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 00:33:49.232 [2024-07-23 06:29:42.316395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.232 [2024-07-23 06:29:42.316423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.232 qpair failed and we were unable to recover it. 
00:33:49.232 [2024-07-23 06:29:42.316592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.316643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.316840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.316865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.317041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.317067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.317258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.317291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.317489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.317514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.317714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.317744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.317911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.317939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.318112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.318138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.318337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.318366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.318528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.318556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 
00:33:49.233 [2024-07-23 06:29:42.318758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.318785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.318935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.318961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.319103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.319128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.319298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.319323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.319500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.319525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.319679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.319724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.319901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.319927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.320090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.320118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.320312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.320340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.320561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.320586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 
00:33:49.233 [2024-07-23 06:29:42.320771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.320800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.320970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.320998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.321199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.321224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.321397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.321425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.321611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.321645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.321824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.321850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.322062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.322087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.322271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.322298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.322458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.322483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.322715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.322743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 
00:33:49.233 [2024-07-23 06:29:42.322912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.322939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.323116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.323141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.233 [2024-07-23 06:29:42.323362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.233 [2024-07-23 06:29:42.323391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.233 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.323570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.323598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.323784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.323811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.324022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.324051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.324208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.324237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.324453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.324479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.324660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.324689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.324861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.324888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 
00:33:49.234 [2024-07-23 06:29:42.325084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.325110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.325336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.325365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.325530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.325559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.325794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.325821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.326061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.326089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.326324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.326349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.326520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.326545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.326735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.326763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.326996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.327021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.327213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.327238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 
00:33:49.234 [2024-07-23 06:29:42.327412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.327440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.327636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.327673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.327844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.327870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.328043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.328068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.328263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.328291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.328481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.328507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.328675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.328704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.328885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.328910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.329071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.329096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.329279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.329304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 
00:33:49.234 [2024-07-23 06:29:42.329519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.329547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.329766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.329792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.329963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.329992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.330157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.330185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.330384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.330410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.330583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.330622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.330822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.330850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.331079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.331104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.331297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.331325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.234 [2024-07-23 06:29:42.331483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.331510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 
00:33:49.234 [2024-07-23 06:29:42.331701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.234 [2024-07-23 06:29:42.331727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.234 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.331892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.331923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.332093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.332130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.332337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.332363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.333296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.333329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.333571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.333599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.333824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.333850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.334064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.334092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.334292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.334317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.334501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.334527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 
00:33:49.235 [2024-07-23 06:29:42.334679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.334705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.334859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.334885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.335037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.335063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.335251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.335279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.335469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.335498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.335700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.335727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.335924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.335952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.336140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.336168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.336362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.336387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.336566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.336594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 
00:33:49.235 [2024-07-23 06:29:42.336817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.336846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.337016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.337042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.337245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.337270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.337459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.337487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.337664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.337690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.337842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.337867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.338065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.338094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.338317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.338343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.338565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.338598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.338805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.338836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 
00:33:49.235 [2024-07-23 06:29:42.339075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.339117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.339308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.339353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.339550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.339576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.339767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.339794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.339980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.340033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.340260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.340303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.340482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.340525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.340684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.340711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.340861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.235 [2024-07-23 06:29:42.340886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.235 qpair failed and we were unable to recover it. 00:33:49.235 [2024-07-23 06:29:42.341090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.341132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 
00:33:49.236 [2024-07-23 06:29:42.341330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.341373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.341548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.341573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.341753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.341780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.341981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.342025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.342224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.342268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.342516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.342559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.342743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.342769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.342936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.342979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.343153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.343198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.343425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.343469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 
00:33:49.236 [2024-07-23 06:29:42.344451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.344481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.344690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.344717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.344914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.344943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.345132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.345157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.345346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.345371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.345526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.345551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.345740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.345766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.345961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.346011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.346201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.346247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 00:33:49.236 [2024-07-23 06:29:42.346399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.346424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it. 
00:33:49.236 [2024-07-23 06:29:42.346599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.236 [2024-07-23 06:29:42.346630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.236 qpair failed and we were unable to recover it.
00:33:49.236-00:33:49.242 [2024-07-23 06:29:42.346 - 06:29:42.394] (the same pair of errors repeated roughly 200 more times: every connect() attempt to addr=10.0.0.2, port=4420 failed with errno = 111; tqpair was 0x7fbe50000b90 for all but three attempts, which reported tqpair=0x17ea4b0; each attempt ended with "qpair failed and we were unable to recover it.")
00:33:49.242 [2024-07-23 06:29:42.394535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.394560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.394732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.394776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.395012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.395055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.395281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.395325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.395537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.395562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.395764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.395808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.396009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.396051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.396254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.396297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.396498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.396524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.396723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.396767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 
00:33:49.242 [2024-07-23 06:29:42.397002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.397045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.397283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.397325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.397504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.397530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.397723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.397766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.397941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.397968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.398193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.398236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.398435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.398460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.398663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.398698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.398903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.398946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.399170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.399212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 
00:33:49.242 [2024-07-23 06:29:42.399413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.399457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.242 [2024-07-23 06:29:42.399608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.242 [2024-07-23 06:29:42.399642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.242 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.399819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.399844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.400054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.400081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.400270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.400313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.400512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.400544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.400765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.400808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.401034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.401077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.401297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.401339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.401516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.401542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 
00:33:49.243 [2024-07-23 06:29:42.401745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.401788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.401984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.402027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.402261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.402303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.402450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.402475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.402664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.402693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.402908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.402951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.403124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.403167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.403337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.403365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.403554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.403579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.403785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.403830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 
00:33:49.243 [2024-07-23 06:29:42.404036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.404080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.404275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.404318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.404474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.404501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.404695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.404739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.404970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.405013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.405191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.405235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.405391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.405417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.405618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.405644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.405785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.405812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.405973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.406016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 
00:33:49.243 [2024-07-23 06:29:42.406207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.406249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.406396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.406422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.406599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.406642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.406837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.406866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.407071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.407115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.407278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.407321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.407495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.407520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.407694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.407738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.407936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.243 [2024-07-23 06:29:42.407980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.243 qpair failed and we were unable to recover it. 00:33:49.243 [2024-07-23 06:29:42.408208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.408250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 
00:33:49.244 [2024-07-23 06:29:42.408402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.408429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.408601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.408633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.408826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.408870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.409055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.409098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.409300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.409343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.409548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.409578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.409758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.409801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.409972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.410016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.410209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.410253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.410432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.410458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 
00:33:49.244 [2024-07-23 06:29:42.410676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.410720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.410933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.410976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.411203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.411246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.411399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.411424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.411600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.411632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.411797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.411840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.412039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.412082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.412278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.412307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.412526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.412552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.412728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.412755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 
00:33:49.244 [2024-07-23 06:29:42.412955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.412998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.413166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.413208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.413399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.413441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.413623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.413650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.413825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.413851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.414048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.414091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.414320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.414363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.414543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.414569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.414756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.414782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.414979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.415021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 
00:33:49.244 [2024-07-23 06:29:42.415239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.415281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.415477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.415526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.415731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.415775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.415959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.415988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.416150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.416178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.416368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.416396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.416622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.244 [2024-07-23 06:29:42.416648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.244 qpair failed and we were unable to recover it. 00:33:49.244 [2024-07-23 06:29:42.416785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.416810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.416976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.417004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.417197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.417225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 
00:33:49.245 [2024-07-23 06:29:42.417439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.417485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.417698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.417724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.417894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.417921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.418138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.418167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.418382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.418410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.418570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.418596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.418787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.418813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.419012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.419041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.419302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.419352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.419539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.419568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 
00:33:49.245 [2024-07-23 06:29:42.419778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.419806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.419961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.419986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.420176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.420205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.420386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.420414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.420578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.420603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.420811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.420837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.421028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.421054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.421252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.421281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.421463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.421491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.421677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.421704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 
00:33:49.245 [2024-07-23 06:29:42.421860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.421886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.422086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.422135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.422307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.422335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.422550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.422579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.422745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.422772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.422971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.423000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.423184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.423231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.423386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.423415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.423609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.423646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.423807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.423834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 
00:33:49.245 [2024-07-23 06:29:42.424062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.424091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.424247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.424275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.424497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.424526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.424738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.424765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.424921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.424946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.245 [2024-07-23 06:29:42.425136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.245 [2024-07-23 06:29:42.425164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.245 qpair failed and we were unable to recover it. 00:33:49.246 [2024-07-23 06:29:42.425379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-07-23 06:29:42.425407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 00:33:49.246 [2024-07-23 06:29:42.425569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-07-23 06:29:42.425594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 00:33:49.246 [2024-07-23 06:29:42.425750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-07-23 06:29:42.425776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 00:33:49.246 [2024-07-23 06:29:42.425976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.246 [2024-07-23 06:29:42.426004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.246 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every further reconnect attempt in this span: roughly 200 more occurrences between 2024-07-23 06:29:42.426195 and 06:29:42.468960, all against the same tqpair, address, and port ...]
00:33:49.251 [2024-07-23 06:29:42.469151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.469180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.469365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.469394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.469559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.469585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.469783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.469812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.469976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.470005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.470196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.470221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.470403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.470431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.470625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.470651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.470824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.470850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.471014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.471043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 
00:33:49.251 [2024-07-23 06:29:42.471242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.471267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.471440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.471465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.471644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.471671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.471908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.471936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.251 qpair failed and we were unable to recover it. 00:33:49.251 [2024-07-23 06:29:42.472123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.251 [2024-07-23 06:29:42.472148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.472387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.472437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.472628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.472657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.472834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.472859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.473090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.473118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.473308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.473337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 
00:33:49.252 [2024-07-23 06:29:42.473563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.473588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.473798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.473826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.474021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.474050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.474257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.474285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.474446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.474475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.474670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.474699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.474895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.474921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.475125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.475154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.475345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.475374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.475600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.475633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 
00:33:49.252 [2024-07-23 06:29:42.475817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.475842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.476020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.476045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.476218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.476244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.476462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.476491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.476716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.476742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.476920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.476945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.477141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.477169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.477392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.477420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.477645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.477670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.477846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.477873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 
00:33:49.252 [2024-07-23 06:29:42.478090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.478118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.478284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.478309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.478472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.478499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.478688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.478717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.478912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.478938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.479137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.479165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.479368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.479393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.479588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.479619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.479823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.479851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.480040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.480069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 
00:33:49.252 [2024-07-23 06:29:42.480244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.480275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.480466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.480494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.252 [2024-07-23 06:29:42.480682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.252 [2024-07-23 06:29:42.480711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.252 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.480885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.480910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.481070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.481098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.481317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.481346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.481517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.481542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.481720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.481749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.481948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.481976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.482174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.482199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 
00:33:49.253 [2024-07-23 06:29:42.482388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.482416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.482581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.482609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.482802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.482827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.482996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.483021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.483217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.483245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.483416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.483441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.483629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.483658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.483847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.483875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.484032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.484057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.484272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.484299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 
00:33:49.253 [2024-07-23 06:29:42.484480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.484508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.484685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.484712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.484905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.484933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.485128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.485156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.485351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.485376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.485606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.485641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.485801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.485829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.485999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.486028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.486172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.486198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.486422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.486450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 
00:33:49.253 [2024-07-23 06:29:42.486652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.486679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.486852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.486881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.487077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.487102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.487278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.487304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.487491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.487520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.487683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.487709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.487858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.487884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.488060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.488086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.488237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.488263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.488406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.488432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 
00:33:49.253 [2024-07-23 06:29:42.488634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.253 [2024-07-23 06:29:42.488661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.253 qpair failed and we were unable to recover it. 00:33:49.253 [2024-07-23 06:29:42.488828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.488868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.489053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.489080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.489252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.489278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.489473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.489502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.489712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.489739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.489929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.489957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.490117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.490146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.490340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.490369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.490557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.490585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 
00:33:49.254 [2024-07-23 06:29:42.490755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.490780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.490982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.491011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.491323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.491377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.491621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.491646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.491800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.491826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.492054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.492082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.492302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.492352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.492515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.492543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.492765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.492791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.493007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.493035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 
00:33:49.254 [2024-07-23 06:29:42.493335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.493385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.493661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.493704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.493882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.493924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.494138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.494166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.494389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.494442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.494633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.494658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.494835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.494861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.495082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.495107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.495420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.495472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.495670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.495696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 
00:33:49.254 [2024-07-23 06:29:42.495871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.495896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.496094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.496122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.496399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.496449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.496665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.496691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.496914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.496942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.497154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.497179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.497376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.497402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.497601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.497640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.497817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.497844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.254 qpair failed and we were unable to recover it. 00:33:49.254 [2024-07-23 06:29:42.498034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.254 [2024-07-23 06:29:42.498062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 
00:33:49.255 [2024-07-23 06:29:42.498248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.498273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.498443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.498468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.498701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.498729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.498901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.498929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.499106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.499131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.499309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.499334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.499555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.499583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.499800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.499825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.500004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.500029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.500177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.500202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 
00:33:49.255 [2024-07-23 06:29:42.500391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.500418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.500588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.500625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.500855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.500880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.501052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.501077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.501234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.501259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.501448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.501480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.501648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.501675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.501845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.501870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.502073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.502101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.502269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.502298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 
00:33:49.255 [2024-07-23 06:29:42.502492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.502517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.502683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.502718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.502879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.502904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.503102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.503127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.503333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.503358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.503531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.503556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.503701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.503727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.503881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.503907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.504081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.504106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.504278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.504303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 
00:33:49.255 [2024-07-23 06:29:42.504469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.504495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.504695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.504724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.504901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.504926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.505094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.505120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.505292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.505318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.505487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.505517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.505736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.505762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.255 [2024-07-23 06:29:42.506741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.255 [2024-07-23 06:29:42.506776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.255 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.506969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.506998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.507225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.507253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 
00:33:49.256 [2024-07-23 06:29:42.507462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.507488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.507669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.507696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.507912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.507946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.508123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.508151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.508325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.508350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.508526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.508551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.508702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.508728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.508901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.508926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.509107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.509132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.509305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.509330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 
00:33:49.256 [2024-07-23 06:29:42.509528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.509556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.509756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.509782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.509957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.509983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.510158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.510183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.510384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.510409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.510596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.510633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.510847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.510872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.511042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.511068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.511245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.511270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.511431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.511459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 
00:33:49.256 [2024-07-23 06:29:42.511655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.511680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.511854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.511880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.512108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.512136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.512287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.512315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.512509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.512534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.512681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.512707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.512877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.512912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.513104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.513129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.513302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.513327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 00:33:49.256 [2024-07-23 06:29:42.513499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.513528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.256 qpair failed and we were unable to recover it. 
00:33:49.256 [2024-07-23 06:29:42.513677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.256 [2024-07-23 06:29:42.513721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.513935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.513963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.514133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.514159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.514308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.514334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.514529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.514557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.514748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.514774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.514949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.514975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.515171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.515196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.515385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.515413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.515609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.515649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 
00:33:49.257 [2024-07-23 06:29:42.515813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.515838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.516035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.516060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.516260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.516288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.516459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.516487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.516662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.516689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.516888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.516914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.517113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.517141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.517297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.517325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.517500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.517525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.517720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.517746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 
00:33:49.257 [2024-07-23 06:29:42.517922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.517950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.518115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.518143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.518338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.518364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.518535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.518560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.518795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.518821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.519003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.519028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.519180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.519204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.519388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.519414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.519563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.519588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.519741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.519767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 
00:33:49.257 [2024-07-23 06:29:42.519916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.519941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.520109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.520133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.520331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.520356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.520491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.520516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.520691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.520717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.520860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.520886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.521094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.521122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.521311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.521338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.257 [2024-07-23 06:29:42.521502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.257 [2024-07-23 06:29:42.521527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.257 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.521677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.521703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 
00:33:49.258 [2024-07-23 06:29:42.521851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.521879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.522101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.522129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.522331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.522356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.522530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.522555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.522734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.522763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.522988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.523014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.523192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.523218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.523415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.523440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.523661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.523689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.523899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.523928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 
00:33:49.258 [2024-07-23 06:29:42.524135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.524160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.524356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.524381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.524536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.524562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.524770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.524796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.524947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.524973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.525112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.525138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.525329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.525356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.525579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.525606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.525808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.525833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.526016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.526041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 
00:33:49.258 [2024-07-23 06:29:42.526254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.526282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.526450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.526479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.526708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.526734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.526890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.526915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.527117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.527142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.527358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.527383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.527535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.527561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.527777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.527807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.528032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.528060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.528224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.528253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 
00:33:49.258 [2024-07-23 06:29:42.528444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.528469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.528640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.528666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.528839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.528868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.529019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.529048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.529273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.529299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.529474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.529500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.529699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.258 [2024-07-23 06:29:42.529728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.258 qpair failed and we were unable to recover it. 00:33:49.258 [2024-07-23 06:29:42.529892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.529928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.530130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.530155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.530325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.530350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 
00:33:49.259 [2024-07-23 06:29:42.530525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.530550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.530731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.530756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.530902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.530927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.531097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.531122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.531291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.531319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.531523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.531550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.531749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.531774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.531954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.531979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.532125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.532159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.532355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.532382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 
00:33:49.259 [2024-07-23 06:29:42.532572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.532598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.532780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.532805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.532971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.533000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.533187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.533215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.533391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.533420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.533596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.533627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.533854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.533882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.534050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.534078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.534277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.534309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.534509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.534535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 
00:33:49.259 [2024-07-23 06:29:42.534774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.534802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.534956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.534982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.535130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.535155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.535298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.535323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.535495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.535523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.535728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.535754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.535902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.535927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.536103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.536128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.536342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.536370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.536561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.536587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 
00:33:49.259 [2024-07-23 06:29:42.536771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.536798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.536971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.536998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.537204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.537230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.537451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.537480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.537683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.537712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.259 [2024-07-23 06:29:42.537931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.259 [2024-07-23 06:29:42.537957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.259 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.538134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.538160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.538380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.538408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.538586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.538623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.538803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.538828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 
00:33:49.260 [2024-07-23 06:29:42.539024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.539052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.539244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.539272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.539457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.539482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.539661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.539686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.539875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.539900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.540095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.540122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.260 [2024-07-23 06:29:42.540305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.260 [2024-07-23 06:29:42.540334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.260 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.543630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.543669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.543876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.543909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.544082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.544110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 
00:33:49.544 [2024-07-23 06:29:42.544302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.544332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.544749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.544779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.544990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.545020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.545218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.545249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.545450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.545480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.545695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.545724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.545906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.545936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.546123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.546152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.546366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.546394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.546594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.546627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 
00:33:49.544 [2024-07-23 06:29:42.546807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.546835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.547021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.547049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.547230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.547257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.547407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.547433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.547628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.547655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.547807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.547833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.547987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.548014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.548166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.548191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.548368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.548393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.548563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.548589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 
00:33:49.544 [2024-07-23 06:29:42.548757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.548783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.548930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.548955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.549130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.549155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.549325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.549350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.549531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.549556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.549723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.549750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.549890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.549915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.550088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.550113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.550309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.550334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.550476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.550502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 
00:33:49.544 [2024-07-23 06:29:42.550686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.544 [2024-07-23 06:29:42.550714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.544 qpair failed and we were unable to recover it. 00:33:49.544 [2024-07-23 06:29:42.550866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.550892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.551063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.551093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.551267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.551292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.551436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.551461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.551678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.551703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.551885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.551922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.552135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.552160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.552379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.552407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.552602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.552668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 
00:33:49.545 [2024-07-23 06:29:42.552824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.552849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.552994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.553019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.553163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.553188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.553361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.553389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.553554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.553582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.553804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.553830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.554051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.554079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.554252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.554294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.554458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.554487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.554656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.554682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 
00:33:49.545 [2024-07-23 06:29:42.554822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.554847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.555001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.555045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.555205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.555233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.555407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.555432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.555576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.555601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.555796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.555822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.555970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.555996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.556162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.556187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.556346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.556374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.556594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.556641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 
00:33:49.545 [2024-07-23 06:29:42.556805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.556830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.556978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.557003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.557149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.557193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.557399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.557427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.557591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.557629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.557809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.557836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.558043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.558071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.558249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.558274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.545 [2024-07-23 06:29:42.558413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.545 [2024-07-23 06:29:42.558438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.545 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.558595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.558627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 
00:33:49.546 [2024-07-23 06:29:42.558814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.558839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.558986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.559012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.559167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.559212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.559389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.559415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.559563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.559605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.559818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.559844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.559989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.560030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.560201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.560225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.560373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.560416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.560632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.560672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 
00:33:49.546 [2024-07-23 06:29:42.560877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.560905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.561104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.561130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.561296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.561324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.561491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.561518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.561723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.561750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.561910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.561935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.562137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.562169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.562361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.562389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.562550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.562577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.562773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.562798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 
00:33:49.546 [2024-07-23 06:29:42.562991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.563020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.563210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.563238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.563432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.563460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.563657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.563682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.563884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.563912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.564094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.564122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.564310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.564338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.564512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.564536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.564727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.564757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.564924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.564952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 
00:33:49.546 [2024-07-23 06:29:42.564995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f8470 (9): Bad file descriptor 00:33:49.546 [2024-07-23 06:29:42.565286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.565325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.565501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.565546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.565703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.565732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.565914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.565940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.566136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.566179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.566358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.546 [2024-07-23 06:29:42.566401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.546 qpair failed and we were unable to recover it. 00:33:49.546 [2024-07-23 06:29:42.566582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.566610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.566802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.566828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.567007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.567052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 
00:33:49.547 [2024-07-23 06:29:42.567248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.567300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.567475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.567501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.567696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.567740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.567935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.567964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.568191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.568235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.568383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.568410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.568620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.568647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.568859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.568888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.569074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.569118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.569292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.569335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 
00:33:49.547 [2024-07-23 06:29:42.569511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.569536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.569721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.569766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.569961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.569990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.570213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.570255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.570439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.570465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.570609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.570641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.570818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.570861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.571027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.571074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.571242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.571288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.571438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.571465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 
00:33:49.547 [2024-07-23 06:29:42.571663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.571692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.571880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.571922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.572106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.572149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.572323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.572348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.572506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.572534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.572711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.572740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.572902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.572929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.573136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.573182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.573345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.573373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.573565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.573590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 
00:33:49.547 [2024-07-23 06:29:42.573756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.573781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.573957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.573985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.574184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.574211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.574382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.574407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.574577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.547 [2024-07-23 06:29:42.574602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.547 qpair failed and we were unable to recover it. 00:33:49.547 [2024-07-23 06:29:42.574790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.574815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.575083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.575128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.575315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.575343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.575506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.575534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.575722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.575749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 
00:33:49.548 [2024-07-23 06:29:42.575912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.575941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.576107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.576135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.576298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.576339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.576528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.576556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.576733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.576763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.576933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.576962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.577177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.577205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.577368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.577396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.577594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.577625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.577776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.577801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 
00:33:49.548 [2024-07-23 06:29:42.577992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.578020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.578234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.578262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.578480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.578508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.578684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.578710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.578884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.578909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.579138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.579166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.579322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.579349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.579564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.579592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.579786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.579811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.579979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.580008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 
00:33:49.548 [2024-07-23 06:29:42.580192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.580220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.580410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.580438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.580610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.580667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.580845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.580871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.581067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.581095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.581266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.581295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.548 [2024-07-23 06:29:42.581517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.548 [2024-07-23 06:29:42.581546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.548 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.581746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.581772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.581939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.581967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.582164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.582209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 
00:33:49.549 [2024-07-23 06:29:42.582404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.582431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.582631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.582660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.582855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.582880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.583071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.583098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.583307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.583335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.583524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.583552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.583752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.583777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.583979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.584008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.584225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.584274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.584442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.584470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 
00:33:49.549 [2024-07-23 06:29:42.584680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.584706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.584888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.584912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.585145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.585170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.585399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.585427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.585591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.585622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.585801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.585826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.586017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.586045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.586285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.586325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.586515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.586543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.586714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.586739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 
00:33:49.549 [2024-07-23 06:29:42.586890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.586916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.587149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.587177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.587411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.587438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.587635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.587660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.587836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.587861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.588040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.588067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.588269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.588314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.588503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.588531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.588707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.588732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.588905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.588930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 
00:33:49.549 [2024-07-23 06:29:42.589124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.589152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.589341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.589368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.589536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.589561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.589789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.589817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.549 [2024-07-23 06:29:42.590040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.549 [2024-07-23 06:29:42.590065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.549 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.590210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.590236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.590427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.590455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.590611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.590647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.590842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.590867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.591033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.591062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 
00:33:49.550 [2024-07-23 06:29:42.591247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.591275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.591452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.591478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.591668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.591698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.591857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.591886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.592087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.592112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.592310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.592337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.592531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.592555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.592730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.592756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.592991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.593019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.593208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.593235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 
00:33:49.550 [2024-07-23 06:29:42.593443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.593469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.593643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.593668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.593820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.593845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.593992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.594018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.594210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.594239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.594434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.594462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.594641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.594668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.594818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.594861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.595064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.595089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.595260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.595285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 
00:33:49.550 [2024-07-23 06:29:42.595424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.595449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.595621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.595664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.595844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.595869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.596068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.596096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.596255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.596283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.596480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.596504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.596705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.596734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.596922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.596949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.597149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.597174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.597361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.597393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 
00:33:49.550 [2024-07-23 06:29:42.597623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.597651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.597851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.597876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.598045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.550 [2024-07-23 06:29:42.598073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.550 qpair failed and we were unable to recover it. 00:33:49.550 [2024-07-23 06:29:42.598270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.598298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.598519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.598544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.598718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.598747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.598906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.598934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.599109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.599134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.599307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.599332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.599545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.599573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 
00:33:49.551 [2024-07-23 06:29:42.599775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.599800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.599998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.600027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.600218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.600246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.600454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.600480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.600681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.600710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.600879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.600907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.601077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.601103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.601251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.601277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.601457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.601482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.601679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.601705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 
00:33:49.551 [2024-07-23 06:29:42.601873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.601901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.602117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.602144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.602342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.602368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.602556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.602584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.602751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.602780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.602945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.602971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.603153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.603185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.603410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.603438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.603634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.603661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.603805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.603831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 
00:33:49.551 [2024-07-23 06:29:42.603998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.604026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.604220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.604246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.604411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.604438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.604658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.604686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.604852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.604877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.605065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.605093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.605289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.605316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.605514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.605539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.605681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.605707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.605856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.605899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 
00:33:49.551 [2024-07-23 06:29:42.606088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.551 [2024-07-23 06:29:42.606114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.551 qpair failed and we were unable to recover it. 00:33:49.551 [2024-07-23 06:29:42.606291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.606316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.606460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.606500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.606670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.606707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.606872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.606900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.607124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.607149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.607303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.607329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.607506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.607532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.607720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.607748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.607926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.607951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 
00:33:49.552 [2024-07-23 06:29:42.608127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.608153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.608297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.608322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.608498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.608523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.608729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.608761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.608990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.609015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.609189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.609214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.609393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.609421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.609580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.609607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.609811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.609837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.610038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.610065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 
00:33:49.552 [2024-07-23 06:29:42.610257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.610286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.610490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.610516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.610702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.610729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.610912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.610939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.611116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.611141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.611311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.611337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.611511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.611538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.611736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.611761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.611988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.612015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.612204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.612233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 
00:33:49.552 [2024-07-23 06:29:42.612409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.612435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.612642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.612671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.612859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.612886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.613084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.613110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.613302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.613329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.613488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.613515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.613681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.613706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.613865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.613890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.614059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.552 [2024-07-23 06:29:42.614085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.552 qpair failed and we were unable to recover it. 00:33:49.552 [2024-07-23 06:29:42.614230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.614255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 
00:33:49.553 [2024-07-23 06:29:42.614431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.614457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.614611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.614643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.614798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.614824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.615008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.615034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.615214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.615240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.615430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.615455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.615624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.615651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.615840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.615866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.616033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.616059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.616277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.616304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 
00:33:49.553 [2024-07-23 06:29:42.616484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.616510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.616681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.616708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.616861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.616887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.617072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.617097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.617291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.617317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.617502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.617527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.617683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.617709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.617879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.617905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.618080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.618106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.618304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.618329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 
00:33:49.553 [2024-07-23 06:29:42.618516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.618542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.618727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.618753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.618925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.618951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.619104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.619129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.619278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.619303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.619447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.619472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.619621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.619647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.619799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.619824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.620007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.620033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.620176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.620201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 
00:33:49.553 [2024-07-23 06:29:42.620358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.620384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.553 [2024-07-23 06:29:42.620582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.553 [2024-07-23 06:29:42.620607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.553 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.620782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.620808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.620979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.621004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.621181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.621206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.621382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.621407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.621563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.621588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.621744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.621769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.621947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.621972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.622168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.622194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 
00:33:49.554 [2024-07-23 06:29:42.622355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.622381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.622554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.622583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.622729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.622754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.622904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.622930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.623115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.623141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.623316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.623341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.623511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.623536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.623747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.623773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.623924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.623949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.624200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.624225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 
00:33:49.554 [2024-07-23 06:29:42.624476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.624501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.624663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.624689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.624859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.624884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.625080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.625105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.625300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.625325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.625501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.625526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.625694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.625720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.625897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.625922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.626099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.626124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.626266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.626290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 
00:33:49.554 [2024-07-23 06:29:42.626446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.626471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.626646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.626676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.626819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.626844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.627017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.627043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.627193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.627218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.627394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.627418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.627592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.627623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.627817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.627843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.628015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.628043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 00:33:49.554 [2024-07-23 06:29:42.628221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.554 [2024-07-23 06:29:42.628246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.554 qpair failed and we were unable to recover it. 
00:33:49.555 [2024-07-23 06:29:42.628394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.628419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.628572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.628597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.628754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.628779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.628946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.628971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.629123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.629148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.629295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.629321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.629473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.629499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.629688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.629715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.629889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.629914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.630089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.630113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 
00:33:49.555 [2024-07-23 06:29:42.630290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.630316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.630569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.630594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.630810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.630836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.631086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.631111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.631283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.631309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.631456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.631482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.631659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.631685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.631897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.631922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.632127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.632152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.632329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.632354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 
00:33:49.555 [2024-07-23 06:29:42.632496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.632521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.632675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.632701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.632876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.632902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.633070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.633095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.633270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.633295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.633462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.633487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.633661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.633687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.633873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.633899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.634068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.634094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.634266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.634291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 
00:33:49.555 [2024-07-23 06:29:42.634441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.634466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.634643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.634669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.634812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.634837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.635013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.635039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.635209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.635234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.635387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.635414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.635603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.635633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.555 [2024-07-23 06:29:42.635839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.555 [2024-07-23 06:29:42.635864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.555 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.636009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.636034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.636214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.636254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 
00:33:49.556 [2024-07-23 06:29:42.636443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.636471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.636652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.636680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.636892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.636918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.637064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.637090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.637244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.637269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.637447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.637473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.637733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.637759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.637935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.637961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.638108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.638133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.638306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.638331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 
00:33:49.556 [2024-07-23 06:29:42.638474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.638499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.638728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.638767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.638953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.638980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.639164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.639190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.639366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.639391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.639571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.639596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.639757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.639784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.639937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.639963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.640116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.640141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.640310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.640335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 
00:33:49.556 [2024-07-23 06:29:42.640484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.640508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.640693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.640719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.640894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.640920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.641070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.641095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.641271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.641297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.641453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.641478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.641645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.641688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.641846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.641873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.642051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.642076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.642223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.642249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 
00:33:49.556 [2024-07-23 06:29:42.642401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.642426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.642583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.642609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.642794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.642821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.642998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.643024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.643177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.643202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.556 [2024-07-23 06:29:42.643354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.556 [2024-07-23 06:29:42.643378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.556 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.643528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.643554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.643743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.643770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.643979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.644004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.644149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.644179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 
00:33:49.557 [2024-07-23 06:29:42.644355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.644380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.644584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.644610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.644830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.644855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.645020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.645045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.645201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.645228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.645424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.645450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.645622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.645648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.645802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.645827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.645968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.645993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.646158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.646183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 
00:33:49.557 [2024-07-23 06:29:42.646372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.646398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.646551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.646577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.646771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.646797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.646948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.646975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.647152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.647177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.647355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.647380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.647566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.647591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.647802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.647827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.647979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.648004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.648201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.648226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 
00:33:49.557 [2024-07-23 06:29:42.648397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.648423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.648571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.648597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.648780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.648807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.648980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.649005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.649174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.649199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.649342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.649367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.649549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.649576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.649735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.649761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.649900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.649925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.557 qpair failed and we were unable to recover it. 00:33:49.557 [2024-07-23 06:29:42.650110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.557 [2024-07-23 06:29:42.650135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 
00:33:49.558 [2024-07-23 06:29:42.650313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.650338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.650536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.650561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.650708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.650737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.650888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.650913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.651083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.651108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.651281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.651306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.651452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.651478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.651655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.651682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.651873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.651898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.652042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.652071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 
00:33:49.558 [2024-07-23 06:29:42.652242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.652268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.652441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.652467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.652640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.652666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.652842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.652868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.653042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.653068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.653246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.653271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.653423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.653448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.653622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.653648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.653796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.653822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.653997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.654022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 
00:33:49.558 [2024-07-23 06:29:42.654195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.654220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.654369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.654394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.654537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.654562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.654736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.654761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.654930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.654955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.655156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.655181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.655380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.655405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.655580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.655605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.655796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.655821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.655980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.656005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 
00:33:49.558 [2024-07-23 06:29:42.656204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.656229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.656374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.656400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.656543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.656570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.656734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.656761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.656933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.656959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.558 [2024-07-23 06:29:42.657108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.558 [2024-07-23 06:29:42.657134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.558 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.657304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.657330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.657480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.657505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.657678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.657703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.657862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.657887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 
00:33:49.559 [2024-07-23 06:29:42.658038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.658063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.658231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.658257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.658399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.658424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.658606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.658636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.658788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.658815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.658992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.659017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.659187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.659212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.659415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.659440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.659618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.659645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.659819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.659849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 
00:33:49.559 [2024-07-23 06:29:42.660006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.660031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.660185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.660210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.660380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.660405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.660577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.660602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.660785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.660811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.660959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.660985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.661129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.661156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.661354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.661379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.661552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.661578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.661762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.661789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 
00:33:49.559 [2024-07-23 06:29:42.661959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.661984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.662153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.662179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.662356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.662383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.662567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.662593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.662757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.662783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.662930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.662956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.663133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.663158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.663336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.663361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.663497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.663523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.663697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.663724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 
00:33:49.559 [2024-07-23 06:29:42.663928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.663953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.664128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.664153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.664301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.559 [2024-07-23 06:29:42.664327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.559 qpair failed and we were unable to recover it. 00:33:49.559 [2024-07-23 06:29:42.664482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.664509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.664681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.664708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.664863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.664888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.665067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.665093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.665246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.665271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.665410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.665435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.665637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.665663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 
00:33:49.560 [2024-07-23 06:29:42.665835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.665861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.666011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.666037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.666193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.666218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.666413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.666439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.666588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.666618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.666764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.666790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.666993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.667018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.667193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.667220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.667360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.667385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.667582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.667611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 
00:33:49.560 [2024-07-23 06:29:42.667793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.667819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.667996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.668021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.668168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.668194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.668371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.668396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.668548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.668573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.668740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.668766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.668939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.668964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.669118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.669144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.669285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.669310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.669486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.669511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 
00:33:49.560 [2024-07-23 06:29:42.669686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.669712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.669912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.669937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.670079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.670104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.670284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.670309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.670454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.670479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.670646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.670672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.670816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.670842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.671021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.671047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.671187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.671212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.560 [2024-07-23 06:29:42.671419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.671444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 
00:33:49.560 [2024-07-23 06:29:42.671587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.560 [2024-07-23 06:29:42.671619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.560 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.671774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.671800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.671979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.672004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.672201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.672227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.672407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.672432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.672600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.672641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.672844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.672869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.673039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.673064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.673263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.673288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.673460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.673485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 
00:33:49.561 [2024-07-23 06:29:42.673684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.673710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.673911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.673937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.674108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.674134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.674286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.674311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.674513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.674538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.674719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.674745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.674893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.674918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.675126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.675151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.675299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.675324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.675526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.675555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 
00:33:49.561 [2024-07-23 06:29:42.675709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.675736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.675914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.675940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.676096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.676121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.676329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.676354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.676527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.676554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.676731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.676757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.676956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.676982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.677158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.677184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.677384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.677409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.677578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.677603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 
00:33:49.561 [2024-07-23 06:29:42.677755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.677781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.677929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.677954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.678150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.678176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.678350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.678376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.678553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.678578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.678758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.678783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.678958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.678984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.679161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.561 [2024-07-23 06:29:42.679187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.561 qpair failed and we were unable to recover it. 00:33:49.561 [2024-07-23 06:29:42.679395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.679420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.679599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.679628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 
00:33:49.562 [2024-07-23 06:29:42.679784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.679809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.679948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.679974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.680124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.680149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.680322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.680347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.680498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.680523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.680703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.680729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.680896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.680938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.681129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.681157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.681313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.681342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.681496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.681522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 
00:33:49.562 [2024-07-23 06:29:42.681700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.681728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.681878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.681905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.682078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.682104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.682305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.682332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.682509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.682536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.682694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.682722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.682878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.682904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.683082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.683108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.683312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.683338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.683536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.683565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 
00:33:49.562 [2024-07-23 06:29:42.683740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.683766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.683922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.683947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.684119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.684144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.684326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.684352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.684551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.684576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.684735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.684762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.684939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.684965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.685106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.685131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.685333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.685359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 00:33:49.562 [2024-07-23 06:29:42.685534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.562 [2024-07-23 06:29:42.685559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.562 qpair failed and we were unable to recover it. 
00:33:49.562 [2024-07-23 06:29:42.685719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.685746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.685929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.685954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.686131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.686156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.686306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.686331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.686508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.686533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.686677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.686703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.686881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.686906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.687077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.687102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.687272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.687297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.687454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.687479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 
00:33:49.563 [2024-07-23 06:29:42.687627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.687653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.687830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.687856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.688003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.688028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.688200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.688226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.688402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.688427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.688603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.688640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.688783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.688813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.688991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.689016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.689169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.689194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.689345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.689370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 
00:33:49.563 [2024-07-23 06:29:42.689540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.689565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.689720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.689746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.689944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.689970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.690150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.690175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.690360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.690385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.690561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.690586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.690732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.690758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.690968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.690994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.691138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.691164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.691310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.691335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 
00:33:49.563 [2024-07-23 06:29:42.691511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.691537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.691740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.691765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.691914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.691938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.692111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.692136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.692282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.692307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.692486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.692511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.692655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.692681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.563 [2024-07-23 06:29:42.692853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.563 [2024-07-23 06:29:42.692880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.563 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.693053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.693080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.693222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.693247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 
00:33:49.564 [2024-07-23 06:29:42.693427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.693452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.693627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.693653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.693802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.693828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.694011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.694036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.694215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.694240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.694386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.694411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.694559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.694586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.694739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.694766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.694940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.694966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.695111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.695136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 
00:33:49.564 [2024-07-23 06:29:42.695283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.695309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.695481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.695507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.695679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.695706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.695860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.695885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.696027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.696052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.696250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.696275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.696470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.696499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.696675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.696701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.696874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.696899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.697076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.697101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 
00:33:49.564 [2024-07-23 06:29:42.697300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.697325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.697524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.697549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.697700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.697726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.697898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.697923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.698128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.698152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.698298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.698323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.698501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.698525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.698699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.698724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.698871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.698897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.699068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.699094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 
00:33:49.564 [2024-07-23 06:29:42.699247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.699273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.699418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.699443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.699646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.699671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.699850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.699875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.700073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.700098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.564 qpair failed and we were unable to recover it. 00:33:49.564 [2024-07-23 06:29:42.700244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.564 [2024-07-23 06:29:42.700269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.700420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.700445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.700635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.700661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.700838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.700863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.701049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.701075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 
00:33:49.565 [2024-07-23 06:29:42.701247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.701271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.701418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.701443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.701625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.701651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.701836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.701861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.702017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.702042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.702217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.702242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.702394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.702419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.702625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.702651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.702790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.702815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.702972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.702997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 
00:33:49.565 [2024-07-23 06:29:42.703138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.703163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.703363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.703387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.703590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.703620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.703804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.703829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.703981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.704008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.704169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.704194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.704369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.704398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.704569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.704595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.704776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.704801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.704976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.705001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 
00:33:49.565 [2024-07-23 06:29:42.705152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.705179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.705363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.705388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.705548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.705574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.705735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.705761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.705958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.705983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.706133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.706158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.706333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.706358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.706532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.706559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.706734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.706760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.706956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.706982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 
00:33:49.565 [2024-07-23 06:29:42.707159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.707185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.707336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.707361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.707515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.565 [2024-07-23 06:29:42.707540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.565 qpair failed and we were unable to recover it. 00:33:49.565 [2024-07-23 06:29:42.707719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.707744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.707894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.707919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.708091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.708116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.708288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.708313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.708463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.708488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.708666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.708691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.708848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.708873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 
00:33:49.566 [2024-07-23 06:29:42.709044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.709069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.709246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.709271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.709414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.709439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.709623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.709649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.709802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.709826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.709982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.710008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.710189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.710214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.710379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.710404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.710577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.710603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.710783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.710808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 
00:33:49.566 [2024-07-23 06:29:42.710991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.711016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.711190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.711216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.711365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.711390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.711590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.711619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.711763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.711788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.711957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.711982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.712126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.712156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.712332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.712357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.712533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.712558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 00:33:49.566 [2024-07-23 06:29:42.712735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.566 [2024-07-23 06:29:42.712761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.566 qpair failed and we were unable to recover it. 
[the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for each subsequent reconnect attempt logged between 2024-07-23 06:29:42.712 and 06:29:42.751]
00:33:49.572 [2024-07-23 06:29:42.752092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.752117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.752286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.752311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.752465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.752491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.752677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.752703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.752858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.752885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.753061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.753086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.753266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.753292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.753467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.753492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.753646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.753672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.753821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.753846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 
00:33:49.572 [2024-07-23 06:29:42.754050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.754076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.754226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.754252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.754452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.754478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.754625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.754651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.754796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.754821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.754999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.755025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.755198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.755224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.755396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.755421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.755561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.755586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.755775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.755801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 
00:33:49.572 [2024-07-23 06:29:42.755975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.756001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.756172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.756197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.756366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.756392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.756597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.756629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.756838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.756863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.572 [2024-07-23 06:29:42.757038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.572 [2024-07-23 06:29:42.757063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.572 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.757202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.757227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.757404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.757429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.757601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.757641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.757822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.757847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 
00:33:49.573 [2024-07-23 06:29:42.758015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.758041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.758185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.758211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.758354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.758380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.758554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.758580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.758762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.758788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.758935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.758961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.759131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.759160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.759330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.759357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.759555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.759580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.759733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.759759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 
00:33:49.573 [2024-07-23 06:29:42.759933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.759960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.760135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.760161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.760340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.760366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.760540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.760567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.760753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.760778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.760921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.760947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.761118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.761143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.761292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.761318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.761518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.761544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.761696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.761722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 
00:33:49.573 [2024-07-23 06:29:42.761866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.761892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.762065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.762090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.762236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.762262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.762401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.762426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.762611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.762641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.762816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.762841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.762987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.763012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.763161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.763186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.763361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.763386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.763542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.763567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 
00:33:49.573 [2024-07-23 06:29:42.763778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.763804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.764004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.764030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.764202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.573 [2024-07-23 06:29:42.764229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.573 qpair failed and we were unable to recover it. 00:33:49.573 [2024-07-23 06:29:42.764390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.764416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.764586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.764617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.764765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.764790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.764943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.764969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.765152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.765177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.765338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.765363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.765533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.765559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 
00:33:49.574 [2024-07-23 06:29:42.765739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.765765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.765944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.765969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.766146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.766171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.766313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.766339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.766482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.766507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.766659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.766685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.766860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.766889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.767061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.767086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.767262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.767287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.767452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.767477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 
00:33:49.574 [2024-07-23 06:29:42.767650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.767676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.767852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.767878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.768034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.768060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.768255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.768281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.768461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.768486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.768642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.768668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.768819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.768844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.768988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.769013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.769185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.769211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.769386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.769412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 
00:33:49.574 [2024-07-23 06:29:42.769592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.769623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.769797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.769823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.770004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.770030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.770173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.770199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.770370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.770397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.770542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.770568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.770724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.770750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.770895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.770920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.771098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.771123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.574 [2024-07-23 06:29:42.771320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.771344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 
00:33:49.574 [2024-07-23 06:29:42.771492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-23 06:29:42.771517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.574 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.771684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.771710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.771855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.771881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.772052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.772077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.772226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.772251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.772420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.772446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.772622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.772648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.772797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.772822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.772994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.773020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.773187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.773212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 
00:33:49.575 [2024-07-23 06:29:42.773365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.773392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.773591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.773622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.773771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.773796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.773968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.773993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.774193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.774218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.774397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.774422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.774597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.774637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.774811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.774836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.775007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.775032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.775203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.775228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 
00:33:49.575 [2024-07-23 06:29:42.775406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.775432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.775611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.775643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.775847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.775872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.776021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.776046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.776227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.776252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.776405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.776430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.776604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.776635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.776781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.776807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.776979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.777004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 00:33:49.575 [2024-07-23 06:29:42.777150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.575 [2024-07-23 06:29:42.777175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.575 qpair failed and we were unable to recover it. 
00:33:49.575 [2024-07-23 06:29:42.777322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.575 [2024-07-23 06:29:42.777347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:49.575 qpair failed and we were unable to recover it.
00:33:49.575 [... the same three-line error triple (posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats back-to-back from 06:29:42.777546 through 06:29:42.817792, console timestamps 00:33:49.575-00:33:49.581 ...]
00:33:49.581 [2024-07-23 06:29:42.817971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.817996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.818144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.818169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.818342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.818367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.818541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.818570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.818754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.818780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.818933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.818958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.819130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.819154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.819355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.819380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.819521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.819547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.819750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.819776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 
00:33:49.581 [2024-07-23 06:29:42.819928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.819953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.820153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.820177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.820346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.581 [2024-07-23 06:29:42.820372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.581 qpair failed and we were unable to recover it. 00:33:49.581 [2024-07-23 06:29:42.820549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.820574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.820784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.820809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.820978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.821004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.821177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.821202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.821374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.821399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.821574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.821599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.821783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.821808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 
00:33:49.582 [2024-07-23 06:29:42.821984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.822010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.822183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.822208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.822377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.822402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.822571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.822596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.822772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.822798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.822969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.822994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.823139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.823164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.823365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.823390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.823563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.823588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.823765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.823790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 
00:33:49.582 [2024-07-23 06:29:42.823941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.823965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.824162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.824187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.824331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.824356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.824531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.824556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.824712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.824738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.824880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.824905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.825102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.825127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.825305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.825331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.825529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.825555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.825705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.825732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 
00:33:49.582 [2024-07-23 06:29:42.825906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.825931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.826099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.826125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.826272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.826297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.826498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.826527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.826700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.826725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.826924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.826948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.827146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.827171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.827342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.827368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.827538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.827562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.827714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.827741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 
00:33:49.582 [2024-07-23 06:29:42.827943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.582 [2024-07-23 06:29:42.827969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.582 qpair failed and we were unable to recover it. 00:33:49.582 [2024-07-23 06:29:42.828137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.828162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.828358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.828383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.828521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.828545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.828689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.828714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.828872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.828897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.829070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.829095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.829272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.829297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.829450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.829476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.829623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.829648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 
00:33:49.583 [2024-07-23 06:29:42.829825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.829850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.830006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.830033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.830206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.830232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.830378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.830403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.830548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.830573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.830722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.830747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.830945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.830970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.831145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.831170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.831339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.831365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.831529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.831554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 
00:33:49.583 [2024-07-23 06:29:42.831711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.831737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.831910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.831935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.832112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.832141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.832297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.832323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.832502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.832529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.832681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.832707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.832855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.832880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.833027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.833052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.833201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.833226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.833404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.833429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 
00:33:49.583 [2024-07-23 06:29:42.833604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.833645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.833824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.833849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.834001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.834027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.834171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.834202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.834380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.834405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.834555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.834580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.834758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.834783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.834959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.834984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.583 [2024-07-23 06:29:42.835161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.583 [2024-07-23 06:29:42.835186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.583 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.835380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.835405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 
00:33:49.584 [2024-07-23 06:29:42.835599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.835629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.835774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.835799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.835975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.836001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.836179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.836206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.836385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.836410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.836624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.836651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.836793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.836820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.837024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.837050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.837223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.837248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.837414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.837439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 
00:33:49.584 [2024-07-23 06:29:42.837588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.837620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.837800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.837825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.837999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.838025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.838234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.838259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.838427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.838452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.838595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.838626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.838772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.838797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.838949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.838975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.839123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.839148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.839294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.839318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 
00:33:49.584 [2024-07-23 06:29:42.839474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.839499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.839676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.839702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.839854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.839880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.840080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.840105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.840311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.840336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.840490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.840515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.840656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.840682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.840855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.840880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.841056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.841081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.841256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.841282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 
00:33:49.584 [2024-07-23 06:29:42.841480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.841505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.841680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.841706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.841851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.841876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.842024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.842053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.842217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.842243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.842413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.842438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.584 qpair failed and we were unable to recover it. 00:33:49.584 [2024-07-23 06:29:42.842610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.584 [2024-07-23 06:29:42.842641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.585 qpair failed and we were unable to recover it. 00:33:49.585 [2024-07-23 06:29:42.842818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.585 [2024-07-23 06:29:42.842844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.585 qpair failed and we were unable to recover it. 00:33:49.585 [2024-07-23 06:29:42.842997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.585 [2024-07-23 06:29:42.843025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.585 qpair failed and we were unable to recover it. 00:33:49.585 [2024-07-23 06:29:42.843197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.585 [2024-07-23 06:29:42.843222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.585 qpair failed and we were unable to recover it. 
00:33:49.585 [2024-07-23 06:29:42.843396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:49.585 [2024-07-23 06:29:42.843422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 
00:33:49.585 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-23 06:29:42.843597 through 06:29:42.884094 ...] 
00:33:49.873 [2024-07-23 06:29:42.884240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:49.873 [2024-07-23 06:29:42.884265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 
00:33:49.873 qpair failed and we were unable to recover it. 
00:33:49.874 [2024-07-23 06:29:42.884414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.884439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.884622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.884648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.884816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.884841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.885017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.885042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.885214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.885239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.885408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.885434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.885585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.885610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.885794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.885820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.886027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.886052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.886199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.886223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 
00:33:49.874 [2024-07-23 06:29:42.886390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.886419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.886563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.886588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.886752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.886778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.886922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.886947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.887144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.887169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.887356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.887381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.887529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.887554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.887708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.887734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.887912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.887937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.888110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.888136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 
00:33:49.874 [2024-07-23 06:29:42.888307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.888332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.888477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.888502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.888673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.888699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.888902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.888927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.889105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.889131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.889309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.889334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.889511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.889537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.889690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.889715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.889870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.889895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.890070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.890095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 
00:33:49.874 [2024-07-23 06:29:42.890264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.890289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.890453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.890478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.890649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.890675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.890828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.890853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.891056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.891081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.891255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.891280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.891480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.891505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.874 qpair failed and we were unable to recover it. 00:33:49.874 [2024-07-23 06:29:42.891652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.874 [2024-07-23 06:29:42.891678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.891846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.891871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.892043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.892068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 
00:33:49.875 [2024-07-23 06:29:42.892235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.892260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.892430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.892455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.892600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.892738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.892919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.892945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.893093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.893119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.893270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.893295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.893463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.893488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.893658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.893684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.893862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.893888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.894035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.894060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 
00:33:49.875 [2024-07-23 06:29:42.894228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.894257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.894403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.894429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.894599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.894629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.894805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.894830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.895002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.895028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.895199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.895224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.895399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.895424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.895574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.895599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.895753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.895779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.895951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.895976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 
00:33:49.875 [2024-07-23 06:29:42.896116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.896141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.896342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.896367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.896544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.896569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.896728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.896754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.896956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.896981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.897128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.897154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.897335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.897360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.897569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.897594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.897786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.897811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.875 [2024-07-23 06:29:42.898014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.898040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 
00:33:49.875 [2024-07-23 06:29:42.898234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.875 [2024-07-23 06:29:42.898259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.875 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.898432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.898457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.898598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.898630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.898806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.898831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.899010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.899036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.899185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.899212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.899362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.899387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.899560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.899585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.899769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.899795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.899940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.899966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 
00:33:49.876 [2024-07-23 06:29:42.900145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.900170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.900368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.900393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.900546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.900571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.900750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.900776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.900922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.900948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.901092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.901117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.901314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.901340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.901510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.901535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.901682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.901710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.901913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.901938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 
00:33:49.876 [2024-07-23 06:29:42.902112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.902141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.902315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.902340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.902484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.902511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.902656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.902683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.902859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.902885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.903056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.903081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.903260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.903286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.903462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.903489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.903661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.903687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.903865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.903890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 
00:33:49.876 [2024-07-23 06:29:42.904067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.904093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.904237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.904262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.904411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.904436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.904631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.904657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.876 [2024-07-23 06:29:42.904806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.876 [2024-07-23 06:29:42.904831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.876 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.904985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.905010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.905149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.905175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.905342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.905368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.905540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.905566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.905716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.905742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 
00:33:49.877 [2024-07-23 06:29:42.905921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.905946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.906097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.906122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.906277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.906302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.906448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.906473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.906646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.906672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.906813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.906838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.907038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.907063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.907238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.907264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.907464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.907490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.907640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.907665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 
00:33:49.877 [2024-07-23 06:29:42.907822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.907847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.908017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.908042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.908245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.908270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.908441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.908466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.908638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.908663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.908839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.908864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.909036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.909062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.909212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.909237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.909383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.909409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.909584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.909609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 
00:33:49.877 [2024-07-23 06:29:42.909765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.909794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.909990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.910015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.910167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.910194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.910372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.910398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.910548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.910574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.910760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.910787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.910932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.910958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.911156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.911182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.911395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.911420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.911561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.911587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 
00:33:49.877 [2024-07-23 06:29:42.911792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.911818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.911965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.877 [2024-07-23 06:29:42.911990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.877 qpair failed and we were unable to recover it. 00:33:49.877 [2024-07-23 06:29:42.912188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.912213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.912399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.912424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.912600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.912641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.912804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.912830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.912976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.913002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.913181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.913207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.913381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.913407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.913576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.913602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 
00:33:49.878 [2024-07-23 06:29:42.913762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.913789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.913940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.913966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.914164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.914190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.914341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.914367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.914564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.914590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.914768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.914794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.914967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.914992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.915174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.915200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.915380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.915406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.915580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.915605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 
00:33:49.878 [2024-07-23 06:29:42.915767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.915792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.915935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.915961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.916162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.916187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.916337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.916362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.916513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.916539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.916723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.916750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.916927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.916952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.917125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.917151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.917304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.917331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.917508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.917533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 
00:33:49.878 [2024-07-23 06:29:42.917693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.917724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.917874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.917899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.918076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.918101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.918299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.918324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.918501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.918528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.918680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.918707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.918878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.918904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.919071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.919097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.919295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.919321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 00:33:49.878 [2024-07-23 06:29:42.919490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.878 [2024-07-23 06:29:42.919516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.878 qpair failed and we were unable to recover it. 
00:33:49.878 [2024-07-23 06:29:42.919719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.919745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.919897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.919924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.920125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.920150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.920301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.920327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.920513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.920539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.920686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.920712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.920859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.920885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.921032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.921057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.921224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.921249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.921419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.921444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 
00:33:49.879 [2024-07-23 06:29:42.921648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.921675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.921847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.921873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.922025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.922050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.922219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.922244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.922399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.922426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.922599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.922631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.922810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.922836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.923024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.923050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.923221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.923245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.923422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.923447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 
00:33:49.879 [2024-07-23 06:29:42.923654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.923679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.923827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.923852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.924024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.924049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.924221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.924246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.924393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.924418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.924561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.924586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.924798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.924824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.924999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.925024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.925224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.925249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.925425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.925450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 
00:33:49.879 [2024-07-23 06:29:42.925604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.925639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.925813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.925838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.926006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.926031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.926199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.926224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.926371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.926396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.926562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.926587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.926768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.926794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.879 [2024-07-23 06:29:42.926944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.879 [2024-07-23 06:29:42.926969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.879 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.927116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.927141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.927339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.927364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 
00:33:49.880 [2024-07-23 06:29:42.927507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.927532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.927710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.927735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.927879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.927905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.928083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.928108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.928287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.928313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.928481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.928506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.928703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.928729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.928899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.928925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.929095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.929120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.929270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.929295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 
00:33:49.880 [2024-07-23 06:29:42.929441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.929466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.929621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.929646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.929790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.929816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.930013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.930038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.930211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.930236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.930403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.930428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.930633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.930659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.930810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.930836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.930992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.931017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.931160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.931185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 
00:33:49.880 [2024-07-23 06:29:42.931332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.931357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.931528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.931553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.931728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.931754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.931930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.931956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.932103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.932128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.932305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.932331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.932500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.932525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.932725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.932751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.932925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.932950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 00:33:49.880 [2024-07-23 06:29:42.933147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.880 [2024-07-23 06:29:42.933172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.880 qpair failed and we were unable to recover it. 
00:33:49.881 [2024-07-23 06:29:42.933310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.933339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.933494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.933521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.933670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.933698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.933872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.933898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.934054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.934079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.934247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.934272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.934435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.934460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.934639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.934665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.934837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.934862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.935039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.935064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 
00:33:49.881 [2024-07-23 06:29:42.935264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.935289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.935465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.935490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.935693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.935719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.935864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.935889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.936066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.936092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.936270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.936295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.936441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.936467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.936640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.936667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.936838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.936864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.937005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.937032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 
00:33:49.881 [2024-07-23 06:29:42.937211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.937237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.937388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.937413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.937595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.937625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.937804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.937829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.937982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.938007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.938175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.938200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.938352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.938378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.938556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.938582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.938784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.938809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.938984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.939009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 
00:33:49.881 [2024-07-23 06:29:42.939159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.939185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.939364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.939390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.939590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.939620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.939776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.939801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.939972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.939997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.940173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.940198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.940369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.940395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.881 qpair failed and we were unable to recover it. 00:33:49.881 [2024-07-23 06:29:42.940565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.881 [2024-07-23 06:29:42.940590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.940770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.940796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.940968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.940992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 
00:33:49.882 [2024-07-23 06:29:42.941174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.941203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.941374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.941401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.941601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.941631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.941836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.941861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.942028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.942054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.942227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.942252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.942404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.942430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.942605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.942635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.942845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.942870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.943073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.943098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 
00:33:49.882 [2024-07-23 06:29:42.943296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.943320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.943469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.943494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.943674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.943700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.943850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.943876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.944058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.944084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.944230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.944255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.944453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.944478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.944647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.944673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.944847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.944873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.945051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.945076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 
00:33:49.882 [2024-07-23 06:29:42.945247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.945272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.945447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.945472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.945638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.945664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.945805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.945830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.946005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.946030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.946185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.946209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.946382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.946407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.946609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.946639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.946787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.946813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.946960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.946986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 
00:33:49.882 [2024-07-23 06:29:42.947180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.947206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.947403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.947428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.947621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.947647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.947820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.947845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.947985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.882 [2024-07-23 06:29:42.948010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.882 qpair failed and we were unable to recover it. 00:33:49.882 [2024-07-23 06:29:42.948205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.948230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.948431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.948456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.948636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.948662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.948807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.948832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.949034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.949059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 
00:33:49.883 [2024-07-23 06:29:42.949227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.949256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.949460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.949485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.949662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.949688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.949869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.949894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.950070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.950095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.950240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.950265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.950407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.950432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.950611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.950641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.950842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.950867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.951015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.951041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 
00:33:49.883 [2024-07-23 06:29:42.951221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.951247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.951417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.951443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.951622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.951647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.951797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.951823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.951985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.952010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.952183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.952208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.952387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.952412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.952556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.952581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.952749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.952775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.952975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.953000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 
00:33:49.883 [2024-07-23 06:29:42.953147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.953173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.953375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.953401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.953541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.953566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.953742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.953767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.953916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.953942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.954138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.954163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.954339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.954364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.954512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.954538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.954696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.954722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.954901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.954927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 
00:33:49.883 [2024-07-23 06:29:42.955099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.955124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.955294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.955319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.955489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.883 [2024-07-23 06:29:42.955514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.883 qpair failed and we were unable to recover it. 00:33:49.883 [2024-07-23 06:29:42.955667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.955693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.955862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.955888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.956060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.956085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.956262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.956287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.956456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.956481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.956655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.956681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.956825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.956851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 
00:33:49.884 [2024-07-23 06:29:42.957051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.957080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.957233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.957259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.957404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.957430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.957644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.957670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.957813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.957838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.958011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.958036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.958234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.958259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.958402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.958427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.958603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.958637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.958837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.958863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 
00:33:49.884 [2024-07-23 06:29:42.959045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.959070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.959239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.959264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.959416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.959441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.959640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.959666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.959828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.959854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.960027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.960052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.960248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.960273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.960417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.960442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.960583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.960608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.960827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.960853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 
00:33:49.884 [2024-07-23 06:29:42.961030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.961055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.961256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.961281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.961428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.961453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.961598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.961629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.961804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.961830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.962001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.962026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.962180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.962205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.962385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.962411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.962581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.962607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.884 [2024-07-23 06:29:42.962814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.962840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 
00:33:49.884 [2024-07-23 06:29:42.963019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.884 [2024-07-23 06:29:42.963045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.884 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.963221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.963247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.963448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.963474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.963646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.963672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.963826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.963851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.964022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.964048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.964216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.964242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.964386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.964412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.964560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.964587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.964801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.964827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 
00:33:49.885 [2024-07-23 06:29:42.964976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.965005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.965187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.965213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.965395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.965421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.965619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.965644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.965786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.965811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.966007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.966032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.966199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.966224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.966393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.966419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.966589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.966618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.966786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.966812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 
00:33:49.885 [2024-07-23 06:29:42.966961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.966987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.967174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.967199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.967348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.967373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.967548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.967573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.967753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.967778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.967953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.967978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.968156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.968181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.968346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.968371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.968524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.968549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.968686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.968712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 
00:33:49.885 [2024-07-23 06:29:42.968854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.968880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.969032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.969057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.885 [2024-07-23 06:29:42.969233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.885 [2024-07-23 06:29:42.969258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.885 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.969396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.969421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.969573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.969598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.969745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.969770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.969911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.969937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.970135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.970160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.970332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.970357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.970560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.970585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 
00:33:49.886 [2024-07-23 06:29:42.970771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.970796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.970951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.970976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.971153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.971179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.971329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.971355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.971549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.971574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.971734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.971761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.971962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.971988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.972132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.972158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.972304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.972330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.972503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.972529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 
00:33:49.886 [2024-07-23 06:29:42.972709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.972739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.972913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.972938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.973110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.973135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.973274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.973299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.973473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.973498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.973679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.973704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.973853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.973878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.974048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.974075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.974213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.974238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.974432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.974457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 
00:33:49.886 [2024-07-23 06:29:42.974634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.974659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.974863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.974889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.975067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.975093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.975267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.975291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.975465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.975491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.975665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.975692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.975872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.975898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.976071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.976097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.976240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.976265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 00:33:49.886 [2024-07-23 06:29:42.976444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.886 [2024-07-23 06:29:42.976469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.886 qpair failed and we were unable to recover it. 
00:33:49.886 [2024-07-23 06:29:42.976645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.976671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.976817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.976842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.976998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.977024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.977199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.977225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.977399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.977424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.977622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.977648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.977819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.977844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.978042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.978067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.978239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.978264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.978438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.978465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 
00:33:49.887 [2024-07-23 06:29:42.978636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.978661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.978862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.978887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.979057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.979082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.979254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.979280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.979423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.979448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.979583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.979608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.979767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.979792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.979968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.979994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.980143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.980168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.980340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.980365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 
00:33:49.887 [2024-07-23 06:29:42.980511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.980540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.980715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.980742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.980924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.980949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.981091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.981117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.981285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.981311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.981457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.981482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.981680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.981705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.981877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.981902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.982075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.982100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.982268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.982293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 
00:33:49.887 [2024-07-23 06:29:42.982489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.982514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.982688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.982714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.982892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.982919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.983088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.983113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.983261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.983286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.983464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.983489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.983637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.983663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.887 [2024-07-23 06:29:42.983841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.887 [2024-07-23 06:29:42.983867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.887 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.984076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.984101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.984272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.984297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 
00:33:49.888 [2024-07-23 06:29:42.984469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.984495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.984654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.984681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.984851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.984876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.985051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.985076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.985278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.985303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.985503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.985529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.985705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.985731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.985880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.985906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.986090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.986115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.986287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.986313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 
00:33:49.888 [2024-07-23 06:29:42.986493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.986519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.986694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.986720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.986868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.986893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.987060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.987085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.987254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.987279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.987450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.987475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.987679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.987704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.987860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.987886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.988060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.988085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.988260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.988285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 
00:33:49.888 [2024-07-23 06:29:42.988463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.988488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.988640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.988666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.988843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.988868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.989012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.989036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.989203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.989228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.989402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.989427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.989575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.989600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.989785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.989811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.989986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.990011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.990152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.990177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 
00:33:49.888 [2024-07-23 06:29:42.990330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.990355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.990508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.990535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.990709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.990735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.990913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.990939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.991117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.991143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.888 [2024-07-23 06:29:42.991315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.888 [2024-07-23 06:29:42.991341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.888 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.991513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.991539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.991716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.991742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.991896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.991921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.992064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.992091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 
00:33:49.889 [2024-07-23 06:29:42.992238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.992264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.992443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.992468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.992618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.992645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.992825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.992850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.993024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.993049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.993197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.993222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.993407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.993432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.993603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.993639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.993813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.993840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.994015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.994040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 
00:33:49.889 [2024-07-23 06:29:42.994192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.994218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.994369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.994394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.994575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.994602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.994779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.994804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.994955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.994981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.995180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.995206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.995350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.995376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.995530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.995556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.995735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.995760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.995941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.995967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 
00:33:49.889 [2024-07-23 06:29:42.996143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.996168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.996345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.996371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.996541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.996567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.996743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.996770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.996920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.996946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.997123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.997148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.997347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.997372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.889 [2024-07-23 06:29:42.997517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.889 [2024-07-23 06:29:42.997542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.889 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.997740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.997766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.997947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.997972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 
00:33:49.890 [2024-07-23 06:29:42.998174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.998199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.998373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.998398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.998546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.998571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.998774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.998799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.998948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.998973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.999126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.999151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.999351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.999376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.999513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.999538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.999682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.999708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:42.999858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:42.999883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 
00:33:49.890 [2024-07-23 06:29:43.000059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.000085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.000264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.000290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.000464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.000489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.000666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.000692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.000872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.000897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.001099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.001124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.001271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.001296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.001467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.001496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.001645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.001670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.001809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.001835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 
00:33:49.890 [2024-07-23 06:29:43.001988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.002014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.002193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.002217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.002397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.002422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.002576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.002601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.002754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.002779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.002951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.002976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.003186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.003211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.003373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.003398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.003569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.003594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.003754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.003779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 
00:33:49.890 [2024-07-23 06:29:43.003928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.003953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.004130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.004155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.004327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.004352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.004494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.004519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.004719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.004744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.890 qpair failed and we were unable to recover it. 00:33:49.890 [2024-07-23 06:29:43.004898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.890 [2024-07-23 06:29:43.004923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.005100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.005124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.005294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.005319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.005473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.005498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.005677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.005703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 
00:33:49.891 [2024-07-23 06:29:43.005882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.005907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.006055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.006080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.006226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.006253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.006426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.006453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.006658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.006683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.006885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.006910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.007053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.007078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.007278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.007303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.007472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.007497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.007673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.007699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 
00:33:49.891 [2024-07-23 06:29:43.007849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.007875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.008076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.008101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.008277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.008303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.008453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.008478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.008646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.008672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.008819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.008844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.008991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.009017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.009195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.009226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.009381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.009406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.009578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.009604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 
00:33:49.891 [2024-07-23 06:29:43.009761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.009786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.009938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.009963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.010115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.010140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.010342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.010367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.010542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.010568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.010717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.010743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.010888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.010914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.011065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.011091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.011293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.011318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.011491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.011516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 
00:33:49.891 [2024-07-23 06:29:43.011691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.011716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.011921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.011947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.012127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.891 [2024-07-23 06:29:43.012152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.891 qpair failed and we were unable to recover it. 00:33:49.891 [2024-07-23 06:29:43.012316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.012341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.012513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.012539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.012716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.012742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.012914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.012939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.013111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.013136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.013334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.013359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.013531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.013556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 
00:33:49.892 [2024-07-23 06:29:43.013739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.013765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.013915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.013940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.014090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.014116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.014288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.014314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.014515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.014540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.014716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.014742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.014885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.014911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.015060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.015086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.015259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.015285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.015434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.015460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 
00:33:49.892 [2024-07-23 06:29:43.015661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.015686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.015867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.015892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.016092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.016118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.016321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.016346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.016499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.016524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.016704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.016729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.016879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.016904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.017083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.017112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.017287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.017312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.017448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.017473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 
00:33:49.892 [2024-07-23 06:29:43.017652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.017678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.017852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.017877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.018027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.018052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.018226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.018252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.018424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.018449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.018596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.018627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.018804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.018829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.018982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.019008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.019181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.019207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.019384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.019409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 
00:33:49.892 [2024-07-23 06:29:43.019607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.892 [2024-07-23 06:29:43.019638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.892 qpair failed and we were unable to recover it. 00:33:49.892 [2024-07-23 06:29:43.019796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.019822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.019971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.019996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.020169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.020194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.020367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.020393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.020596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.020636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.020789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.020816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.020963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.020990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.021170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.021195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.021365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.021390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 
00:33:49.893 [2024-07-23 06:29:43.021560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.021585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.021766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.021792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.021966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.021991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.022166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.022191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.022368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.022394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.022541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.022566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.022743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.022780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.022933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.022959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.023162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.023187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.023337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.023362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 
00:33:49.893 [2024-07-23 06:29:43.023511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.023536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.023722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.023748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.023900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.023925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.024106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.024132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.024282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.024307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.024477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.024501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.024673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.024699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.024853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.024882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.025081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.025106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.025277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.025302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 
00:33:49.893 [2024-07-23 06:29:43.025490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.025515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.025670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.025696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.025869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.025895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.026039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.026065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.026242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.026267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.026444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.026470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.026647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.026672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.026855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.026880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.893 [2024-07-23 06:29:43.027066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.893 [2024-07-23 06:29:43.027091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.893 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.027237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.027262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 
00:33:49.894 [2024-07-23 06:29:43.027432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.027457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.027606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.027637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.027815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.027840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.028005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.028030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.028201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.028226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.028427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.028452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.028632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.028658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.028809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.028836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.029007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.029034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.029212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.029237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 
00:33:49.894 [2024-07-23 06:29:43.029385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.029411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.029558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.029583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.029763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.029789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.029988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.030013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.030190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.030215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.030417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.030443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.030595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.030625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.030776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.030802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.030974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.031000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.031157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.031183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 
00:33:49.894 [2024-07-23 06:29:43.031362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.031387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.031564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.031589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.031770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.031795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.031967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.031992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.032189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.032214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.032381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.032407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.032575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.032601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.032826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.032856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.033005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.033031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.894 [2024-07-23 06:29:43.033203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.033229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 
00:33:49.894 [2024-07-23 06:29:43.033402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.894 [2024-07-23 06:29:43.033427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.894 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.033566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.033592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.033745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.033771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.033946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.033971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.034148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.034175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.034354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.034379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.034555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.034580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.034758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.034784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.034954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.034979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.035150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.035175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 
00:33:49.895 [2024-07-23 06:29:43.035351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.035376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.035555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.035580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.035738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.035765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.035939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.035964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.036139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.036165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.036342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.036368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.036546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.036571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.036757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.036783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.036958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.036984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.037185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.037211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 
00:33:49.895 [2024-07-23 06:29:43.037406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.037431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.037605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.037637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.037810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.037835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.037976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.038002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.038150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.038176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.038355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.038380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.038529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.038554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.038731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.038757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.038911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.038945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.039118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.039144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 
00:33:49.895 [2024-07-23 06:29:43.039300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.039325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.039575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.039600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.039796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.039823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.040073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.040098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.040297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.040323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.040493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.040519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.040688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.040715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.040868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.040899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.895 qpair failed and we were unable to recover it. 00:33:49.895 [2024-07-23 06:29:43.041077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.895 [2024-07-23 06:29:43.041103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.041277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.041304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 
00:33:49.896 [2024-07-23 06:29:43.041477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.041502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.041658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.041684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.041884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.041910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.042118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.042144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.042322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.042347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.042521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.042547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.042721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.042747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.042922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.042948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.043122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.043149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 00:33:49.896 [2024-07-23 06:29:43.043288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.896 [2024-07-23 06:29:43.043314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.896 qpair failed and we were unable to recover it. 
00:33:49.896 [2024-07-23 06:29:43.043459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.896 [2024-07-23 06:29:43.043486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:49.896 qpair failed and we were unable to recover it.
00:33:49.896 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from [2024-07-23 06:29:43.043680] through [2024-07-23 06:29:43.084833] ...]
00:33:49.902 [2024-07-23 06:29:43.084982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.085009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.085158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.085184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.085358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.085382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.085529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.085554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.085739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.085766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.085942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.085967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.086178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.086204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.086343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.086369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.086541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.086567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.086751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.086777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 
00:33:49.902 [2024-07-23 06:29:43.086927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.086952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.087119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.087144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.087310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.087335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.087480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.087505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.087657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.087683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.087863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.087888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.088043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.088068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.088237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.088262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.088437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.088462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.088605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.088635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 
00:33:49.902 [2024-07-23 06:29:43.088787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.088812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.088985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.089011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.089189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.089215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.089389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.089415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.089567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.089594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.089784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.089809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.090011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.090035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.090206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.090232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.090375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.090401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.090603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.090633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 
00:33:49.902 [2024-07-23 06:29:43.090826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.090851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.091029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.091054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.091229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.091254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.091423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.091448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.091623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.091649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.091821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.091846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.092020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.902 [2024-07-23 06:29:43.092045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.902 qpair failed and we were unable to recover it. 00:33:49.902 [2024-07-23 06:29:43.092212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.092237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.092438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.092463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.092637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.092663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 
00:33:49.903 [2024-07-23 06:29:43.092860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.092886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.093032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.093058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.093231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.093256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.093407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.093437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.093628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.093658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.093821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.093846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.093997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.094024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.094208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.094234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.094384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.094409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.094585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.094611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 
00:33:49.903 [2024-07-23 06:29:43.094823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.094849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.095026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.095051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.095193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.095220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.095398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.095424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.095570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.095595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.095751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.095777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.095929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.095953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.096100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.096125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.096326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.096351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.096526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.096551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 
00:33:49.903 [2024-07-23 06:29:43.096751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.096777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.096923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.096949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.097119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.097144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.097315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.097341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.097511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.097535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.097683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.097708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.097881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.097906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.098060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.098086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.098339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.903 [2024-07-23 06:29:43.098365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.903 qpair failed and we were unable to recover it. 00:33:49.903 [2024-07-23 06:29:43.098520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.098545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 
00:33:49.904 [2024-07-23 06:29:43.098716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.098742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.098892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.098917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.099124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.099149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.099323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.099348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.099551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.099576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.099752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.099778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.099923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.099948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.100101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.100127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.100300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.100326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.100507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.100532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 
00:33:49.904 [2024-07-23 06:29:43.100698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.100724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.100899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.100924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.101078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.101105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.101280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.101306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.101479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.101509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.101712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.101738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.101909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.101942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.102096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.102121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.102320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.102345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.102494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.102519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 
00:33:49.904 [2024-07-23 06:29:43.102712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.102738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.102886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.102912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.103091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.103116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.103289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.103314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.103465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.103490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.103644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.103683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.103826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.103853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.104055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.104081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.104224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.104250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.104395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.104420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 
00:33:49.904 [2024-07-23 06:29:43.104634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.104665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.104821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.104847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.105101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.105126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.105331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.105356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.105606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.105637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.105837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.105863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.904 [2024-07-23 06:29:43.106040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.904 [2024-07-23 06:29:43.106065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.904 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.106241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.106266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.106443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.106468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.106730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.106756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 
00:33:49.905 [2024-07-23 06:29:43.106904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.106930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.107109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.107135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.107289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.107314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.107485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.107510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.107657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.107684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.107855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.107880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.108026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.108052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.108203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.108228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.108425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.108450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.108596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.108628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 
00:33:49.905 [2024-07-23 06:29:43.108811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.108836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.108992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.109018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.109162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.109187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.109336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.109361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.109509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.109542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.109698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.109725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.109901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.109926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.110102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.110127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.110296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.110321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 00:33:49.905 [2024-07-23 06:29:43.110470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.905 [2024-07-23 06:29:43.110494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.905 qpair failed and we were unable to recover it. 
00:33:49.905 [2024-07-23 06:29:43.110703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:49.905 [2024-07-23 06:29:43.110729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:49.905 qpair failed and we were unable to recover it.
[the same three-line error repeats roughly 200 more times between 06:29:43.110903 and 06:29:43.152212: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (connection refused) and the qpair at 0x7fbe58000b90 cannot be recovered]
00:33:49.911 [2024-07-23 06:29:43.152412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.152437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.152586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.152611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.152799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.152824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.152968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.152993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.153192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.153217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.153397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.153422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.153562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.153588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.153777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.153803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.154056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.154082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.154263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.154289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 
00:33:49.911 [2024-07-23 06:29:43.154442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.154467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.154676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.154702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.154849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.154874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.155016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.155042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.155216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.155243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.155441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.155467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.155622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.155648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.155819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.155844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.155996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.156021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.156198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.156223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 
00:33:49.911 [2024-07-23 06:29:43.156392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.156417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.156566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.156591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.156753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.156779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.156952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.156977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.157152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.911 [2024-07-23 06:29:43.157181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.911 qpair failed and we were unable to recover it. 00:33:49.911 [2024-07-23 06:29:43.157323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.157348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.157522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.157547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.157734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.157759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.157963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.157987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.158162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.158187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 
00:33:49.912 [2024-07-23 06:29:43.158367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.158393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.158564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.158589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.158776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.158803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.158955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.158980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.159181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.159206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.159355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.159380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.159531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.159555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.159740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.159766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.159922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.159947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.160125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.160152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 
00:33:49.912 [2024-07-23 06:29:43.160407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.160432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.160604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.160635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.160808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.160833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.160989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.161015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.161174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.161199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.161345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.161370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.161521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.161546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.161746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.161772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.161943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.161969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.162136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.162161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 
00:33:49.912 [2024-07-23 06:29:43.162334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.162360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.162517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.162543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.162694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.162720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.162899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.162924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.163100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.163125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.163264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.163291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.163461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.163487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.163663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.163689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.163846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.163871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 00:33:49.912 [2024-07-23 06:29:43.164044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.912 [2024-07-23 06:29:43.164070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.912 qpair failed and we were unable to recover it. 
00:33:49.912 [2024-07-23 06:29:43.164216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.164242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.164416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.164440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.164611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.164641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.164846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.164872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.165047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.165076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.165228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.165255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.165430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.165455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.165641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.165669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.165816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.165842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.166046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.166071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 
00:33:49.913 [2024-07-23 06:29:43.166252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.166277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.166426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.166452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.166627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.166654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.166798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.166824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.167006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.167031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.167197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.167222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.167403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.167428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.167576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.167601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.167825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.167851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.168017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.168042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 
00:33:49.913 [2024-07-23 06:29:43.168212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.168237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.168436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.168462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.168611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.168644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.168796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.168821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.168995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.169020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.169194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.169219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.169417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.169442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.169641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.169667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.169840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.169866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.170042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.170068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 
00:33:49.913 [2024-07-23 06:29:43.170213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.170239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.170438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.170463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.170610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.170655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.170829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.170854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.171003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.171029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.171184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.171211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.913 qpair failed and we were unable to recover it. 00:33:49.913 [2024-07-23 06:29:43.171354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.913 [2024-07-23 06:29:43.171380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.171578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.171603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.171784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.171810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.171986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.172012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 
00:33:49.914 [2024-07-23 06:29:43.172181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.172207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.172356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.172382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.172555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.172581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.172760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.172786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.172965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.172995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.173202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.173226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.173377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.173402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.173551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.173575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.173761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.173787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.173940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.173965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 
00:33:49.914 [2024-07-23 06:29:43.174117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.174142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.174313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.174337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.174512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.174538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.174686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.174712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.174973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.174998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.175172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.175197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.175349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.175374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.175544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.175570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.175791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.175816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.175996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.176022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 
00:33:49.914 [2024-07-23 06:29:43.176199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.176224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.176390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.176414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.176612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.176642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.176839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.176864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.177018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.177044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.177214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.177239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.177389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.177414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.177587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.177619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.177769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.177794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.177959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.177984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 
00:33:49.914 [2024-07-23 06:29:43.178184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.178209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.178382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.178407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.178557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.178582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.178777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.178803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.178949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.178975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.179142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.179168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.179343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.914 [2024-07-23 06:29:43.179369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.914 qpair failed and we were unable to recover it. 00:33:49.914 [2024-07-23 06:29:43.179537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.179562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.179745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.179771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.179918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.179944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 
00:33:49.915 [2024-07-23 06:29:43.180198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.180224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.180474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.180500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.180701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.180727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.180873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.180899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.181095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.181124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.181304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.181329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.181500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.181525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.181696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.181723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.181976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.182001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.182203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.182228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 
00:33:49.915 [2024-07-23 06:29:43.182427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.182452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.182632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.182657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.182809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.182834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.183084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.183108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.183310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.183335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.183486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.183511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.183682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.183707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.183875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.183900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.184058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.184083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.184254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.184279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 
00:33:49.915 [2024-07-23 06:29:43.184448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.184472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.184645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.184672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.184870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.184896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.185042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.185066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.185246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.185271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.185440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.185465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.185634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.185660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.185828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.185853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.186029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.186054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.186196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.186221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 
00:33:49.915 [2024-07-23 06:29:43.186393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.186418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.186635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.186666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.186823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.186848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.187024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.187049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.187191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.187217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.187385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.187411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.187587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.187619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.187770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.915 [2024-07-23 06:29:43.187795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.915 qpair failed and we were unable to recover it. 00:33:49.915 [2024-07-23 06:29:43.187949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.187976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.188150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.188177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 
00:33:49.916 [2024-07-23 06:29:43.188389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.188413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.188599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.188630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.188811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.188837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.189006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.189033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.189205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.189231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.189386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.189411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.189559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.189585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.189770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.189796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.189941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.189967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.190117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.190142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 
00:33:49.916 [2024-07-23 06:29:43.190286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.190312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.190465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.190491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.190645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.190679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.190854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.190879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.191023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.191048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.191247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.191272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.191416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.191442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:49.916 [2024-07-23 06:29:43.191620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.916 [2024-07-23 06:29:43.191645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:49.916 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-23 06:29:43.191824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-23 06:29:43.191850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-23 06:29:43.192000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-23 06:29:43.192026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 
00:33:50.188 [2024-07-23 06:29:43.192178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-23 06:29:43.192203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-23 06:29:43.192371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-23 06:29:43.192397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-23 06:29:43.192568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-23 06:29:43.192593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-23 06:29:43.192786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-23 06:29:43.192811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-23 06:29:43.192982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.188 [2024-07-23 06:29:43.193007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-23 06:29:43.193152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.193177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.193349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.193374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.193545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.193571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.193726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.193752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.193925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.193950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 
00:33:50.189 [2024-07-23 06:29:43.194123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.194148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.194324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.194355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.194558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.194583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.194805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.194831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.194983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.195008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.195194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.195219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.195418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.195443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.195622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.195648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.195821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.195846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.195993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.196018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 
00:33:50.189 [2024-07-23 06:29:43.196169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.196194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.196366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.196391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.196539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.196564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.196714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.196739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.196883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.196908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.197092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.197119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.197265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.197291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.197488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.197514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.197687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.197713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.197861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.197886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 
00:33:50.189 [2024-07-23 06:29:43.198062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.198087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.198225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.198250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.198422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.198447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.198628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.198655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.198833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.198858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.198999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.199024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-23 06:29:43.199198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.189 [2024-07-23 06:29:43.199223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.199368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.199392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.199599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.199629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.199779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.199804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 
00:33:50.190 [2024-07-23 06:29:43.199954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.199979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.200174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.200199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.200367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.200393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.200566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.200592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.200797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.200822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.200990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.201015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.201161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.201186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.201359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.201384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.201561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.201586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.201752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.201779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 
00:33:50.190 [2024-07-23 06:29:43.201982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.202008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.202161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.202190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.202364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.202389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.202539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.202564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.202751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.202776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.202924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.202949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.203146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.203171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.203346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.203371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.203517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.203542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.203693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.203719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 
00:33:50.190 [2024-07-23 06:29:43.203898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.203922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.204091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.204116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.204285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.204310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.204459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.204484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.204682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.204707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.204865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.204891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.205039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.205065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.205235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.205260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.205435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.205460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.205616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.205642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 
00:33:50.190 [2024-07-23 06:29:43.205897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.190 [2024-07-23 06:29:43.205922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.190 qpair failed and we were unable to recover it. 00:33:50.190 [2024-07-23 06:29:43.206125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.206150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.206303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.206328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.206496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.206522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.206675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.206702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.206856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.206882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.207053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.207080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.207221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.207246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.207427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.207453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.207600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.207630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 
00:33:50.191 [2024-07-23 06:29:43.207809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.207836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1892670 Killed "${NVMF_APP[@]}" "$@" 00:33:50.191 [2024-07-23 06:29:43.208006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.208033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.208204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.208229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.208412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.208438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:50.191 [2024-07-23 06:29:43.208607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.208637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:50.191 [2024-07-23 06:29:43.208788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.208815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.209003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.209029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.209204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.209231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 
00:33:50.191 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:50.191 [2024-07-23 06:29:43.209420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.209447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.191 [2024-07-23 06:29:43.209598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.209632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.209781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.209806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.209981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.210006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.210210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.210236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.210488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.210513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.210684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.210710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.210889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.210923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.211068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.211093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 
00:33:50.191 [2024-07-23 06:29:43.211265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.211290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.211456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.211481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.211623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.211649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.211804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.211830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.212010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.191 [2024-07-23 06:29:43.212035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.191 qpair failed and we were unable to recover it. 00:33:50.191 [2024-07-23 06:29:43.212248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.212274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.212480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.212504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.212677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.212703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.212906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.212931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.213132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.213157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 
00:33:50.192 [2024-07-23 06:29:43.213411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.213436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.213644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.213670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.213856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.213882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.214043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.214068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.214245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.214271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1893219 00:33:50.192 [2024-07-23 06:29:43.214443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.214470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1893219 00:33:50.192 [2024-07-23 06:29:43.214648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.214687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.214840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.214866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 
00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1893219 ']' 00:33:50.192 [2024-07-23 06:29:43.215051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.192 [2024-07-23 06:29:43.215077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:50.192 [2024-07-23 06:29:43.215253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.215280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.192 [2024-07-23 06:29:43.215480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.215505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.192 [2024-07-23 06:29:43.215762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.215789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.215965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.215992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.216173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.216198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.216348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.216374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it.
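The nvmf/common.sh trace interleaved above shows the harness launching a second nvmf_tgt inside the cvl_0_0_ns_spdk network namespace (its PID captured as nvmfpid=1893219) and then waiting for the JSON-RPC socket at /var/tmp/spdk.sock before the test proceeds. A rough sketch of that start-and-wait pattern follows; the polling loop and the scripts/rpc.py call with rpc_get_methods are stand-ins for the harness's actual waitforlisten helper, while the nvmf_tgt path, flags, namespace, and socket path are copied from the trace:

  # Start the NVMe-oF TCP target in the test namespace (flags copied from the trace).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!

  # Poll the target's JSON-RPC socket until it answers (stand-in for waitforlisten).
  rpc_addr=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" \
          rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done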
00:33:50.192 [2024-07-23 06:29:43.216574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.216599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.216765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.216792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.217001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.217027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.217179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.217205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.217457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.217482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.217660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.217691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.217843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.217870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.192 qpair failed and we were unable to recover it. 00:33:50.192 [2024-07-23 06:29:43.218089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.192 [2024-07-23 06:29:43.218114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.218296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.218321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.218507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.218532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 
00:33:50.193 [2024-07-23 06:29:43.218738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.218764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.218908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.218934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.219133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.219159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.219331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.219358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.219530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.219555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.219723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.219753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.219899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.219925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.220097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.220123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.220322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.220347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.220520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.220546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 
00:33:50.193 [2024-07-23 06:29:43.220722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.220748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.220923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.220948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.221101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.221127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.221323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.221349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.221530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.221555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.221784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.221810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.221997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.222022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.222165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.222191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.222376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.222401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.222609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.222641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 
00:33:50.193 [2024-07-23 06:29:43.222821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.222846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.223010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.223034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.223226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.223251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.223421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.223446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.223625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.223651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.223809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.223834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.223982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.224007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.193 [2024-07-23 06:29:43.224177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.193 [2024-07-23 06:29:43.224202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.193 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.224384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.224410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.224578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.224603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 
00:33:50.194 [2024-07-23 06:29:43.224813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.224839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.224986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.225011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.225192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.225217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.225391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.225417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.225591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.225620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.225774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.225799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.225973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.225998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.226198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.226224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.226366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.226391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.226544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.226571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 
00:33:50.194 [2024-07-23 06:29:43.226755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.226781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.226938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.226964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.227138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.227164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.227341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.227366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.227514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.227539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.227717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.227750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.227929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.227954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.228130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.228155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.228303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.228328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.228503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.228529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 
00:33:50.194 [2024-07-23 06:29:43.228706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.228732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.228886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.228911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.229060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.229086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.229234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.229259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.229437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.229462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.229601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.229631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.194 [2024-07-23 06:29:43.229798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.194 [2024-07-23 06:29:43.229823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.194 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.229994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.230020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.230216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.230241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.230387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.230412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 
00:33:50.195 [2024-07-23 06:29:43.230600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.230632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.230816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.230842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.231014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.231039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.231206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.231232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.231376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.231402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.231580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.231605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.231825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.231851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.232048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.232074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.232248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.232273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.232453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.232478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 
00:33:50.195 [2024-07-23 06:29:43.232625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.232652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.232791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.232816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.232974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.232999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.233195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.233220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.233396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.233420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.233568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.233593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.233751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.233777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.233929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.233954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.234124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.234149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.234351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.234376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 
00:33:50.195 [2024-07-23 06:29:43.234518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.234543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.234718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.234744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.234890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.234916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.235104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.235129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.235304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.235329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.235501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.235530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.235697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.235723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.235877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.235902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.236104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.236129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.236304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.236329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 
00:33:50.195 [2024-07-23 06:29:43.236506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.236530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.236709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.236735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.236877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.195 [2024-07-23 06:29:43.236903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.195 qpair failed and we were unable to recover it. 00:33:50.195 [2024-07-23 06:29:43.237074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.237099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.237297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.237322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.237460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.237487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.237639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.237664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.237805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.237830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.238024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.238049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.238227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.238252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 
00:33:50.196 [2024-07-23 06:29:43.238401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.238426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.238642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.238667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.238869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.238895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.239061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.239087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.239256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.239281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.239423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.239447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.239601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.239632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.239813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.239838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.239994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.240020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.240168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.240194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 
00:33:50.196 [2024-07-23 06:29:43.240393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.240418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.240607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.240637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.240829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.240854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.241010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.241036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.241208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.241234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.241400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.241424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.241593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.241624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.241800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.241836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.242006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.242031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.242201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.242225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 
00:33:50.196 [2024-07-23 06:29:43.242397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.242421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.242592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.242639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.242816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.196 [2024-07-23 06:29:43.242842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.196 qpair failed and we were unable to recover it. 00:33:50.196 [2024-07-23 06:29:43.243047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.243073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.243234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.243259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.243473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.243503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.243685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.243711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.243859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.243883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.244030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.244056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.244234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.244259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 
00:33:50.197 [2024-07-23 06:29:43.244433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.244459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.244630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.244655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.244889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.244919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.245126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.245151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.245346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.245372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.245548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.245572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.245736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.245761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.245930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.245955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.246104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.246129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.246281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.246305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 
00:33:50.197 [2024-07-23 06:29:43.246511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.246536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.246714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.246741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.246912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.246937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.247111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.247136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.247278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.247302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.247476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.247501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.247655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.247682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.247852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.247876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.248056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.248082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.248253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.248279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 
00:33:50.197 [2024-07-23 06:29:43.248454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.248479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.248658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.248683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.248867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.248891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.249055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.249079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.249256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.249280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.249450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.249474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.249647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.249672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.249844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.249868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.250068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.197 [2024-07-23 06:29:43.250092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.197 qpair failed and we were unable to recover it. 00:33:50.197 [2024-07-23 06:29:43.250267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.250292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 
00:33:50.198 [2024-07-23 06:29:43.250466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.250491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.250639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.250664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.250847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.250872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.251021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.251046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.251211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.251237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.251413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.251443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.251641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.251666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.251820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.251845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.252055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.252079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.252270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.252295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 
00:33:50.198 [2024-07-23 06:29:43.252463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.252488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.252631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.252656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.252882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.252908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.253080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.253106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.253264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.253289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.253460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.253485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.253662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.253687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.253836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.253861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.254024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.254049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.254224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.254248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 
00:33:50.198 [2024-07-23 06:29:43.254404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.254428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.254606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.254637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.254808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.254834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.254979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.255004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.255202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.255227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.255408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.255433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.255581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.255607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.255760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.255785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.255980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.256005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.256263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.256288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 
00:33:50.198 [2024-07-23 06:29:43.256459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.256484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.256642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.256667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.256871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.256896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.257038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.257064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.257236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.257262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.257404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.198 [2024-07-23 06:29:43.257431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.198 qpair failed and we were unable to recover it. 00:33:50.198 [2024-07-23 06:29:43.257585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.257610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.257787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.257812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.257991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.258017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.258169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.258194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 
00:33:50.199 [2024-07-23 06:29:43.258373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.258399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.258544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.258571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.258757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.258783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.258986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.259011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.259185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.259210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.259414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.259443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.259580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.259605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.259798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.259824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.260002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.260027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.260176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.260201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 
00:33:50.199 [2024-07-23 06:29:43.260375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.260400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.260547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.260573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.260779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.260804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.260956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.260983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.261182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.261207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.261377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.261402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.261580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.261605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.261815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.261840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.261984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.262009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.262228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.262253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 
00:33:50.199 [2024-07-23 06:29:43.262397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.262433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.262689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.262715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.262822] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:50.199 [2024-07-23 06:29:43.262887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.262903] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.199 [2024-07-23 06:29:43.262912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.263098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.263123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.263308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.263336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.263478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.263503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.199 [2024-07-23 06:29:43.263691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.199 [2024-07-23 06:29:43.263717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.199 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.263865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.263890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.264062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.264087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 
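The errno = 111 reported by every connect() failure above is ECONNREFUSED: the initiator side of the test is dialing 10.0.0.2 port 4420 before the second nvmf application (whose "Starting SPDK v24.09-pre ... DPDK 24.07.0-rc2 initialization" banner and EAL parameter list are interleaved just above) has finished starting and bound a listener, so the kernel rejects each attempt and the NVMe/TCP driver records "qpair failed and we were unable to recover it." As a minimal, self-contained sketch (illustrative only, not SPDK test code; the address and port are simply copied from the log), a plain POSIX connect() to a reachable host with nothing listening fails the same way:

    /* connect_refused.c - illustrative sketch, not SPDK code: shows how a TCP
     * connect() to a port with no listener surfaces errno 111 (ECONNREFUSED). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With nothing accepting on 10.0.0.2:4420, this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }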
00:33:50.200 [2024-07-23 06:29:43.264265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.264290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.264466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.264491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.264671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.264697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.264871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.264898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.265039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.265066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.265242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.265267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.265446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.265471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.265646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.265673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.265818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.265843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.266044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.266069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 
00:33:50.200 [2024-07-23 06:29:43.266274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.266299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.266470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.266497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.266644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.266671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.266850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.266875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.267073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.267098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.267278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.267304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.267480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.267506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.267711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.267737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.267943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.267968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.268122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.268147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 
00:33:50.200 [2024-07-23 06:29:43.268321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.268346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.268523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.268547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.268724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.268750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.268895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.268921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.269101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.269127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.269267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.269292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.269449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.269475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.269656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.269682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.269832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.269862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.270015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.270040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 
00:33:50.200 [2024-07-23 06:29:43.270218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.270244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.270443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.270469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.270668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.270694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.270859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.270883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.271055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.200 [2024-07-23 06:29:43.271080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.200 qpair failed and we were unable to recover it. 00:33:50.200 [2024-07-23 06:29:43.271255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.271282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.271454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.271480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.271649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.271674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.271821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.271845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.271994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.272019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 
00:33:50.201 [2024-07-23 06:29:43.272214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.272239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.272408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.272434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.272608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.272638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.272815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.272841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.273009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.273035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.273182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.273206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.273355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.273379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.273520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.273547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.273723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.273748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.273902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.273935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 
00:33:50.201 [2024-07-23 06:29:43.274121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.274147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.274323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.274348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.274491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.274517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.274688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.274713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.274869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.274895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.275100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.275126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.275275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.275301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.275459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.275484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.275641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.275666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.275847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.275872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 
00:33:50.201 [2024-07-23 06:29:43.276048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.276073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.276244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.276269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.276442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.276467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.276624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.276649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.276800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.276825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.276976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.277002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.277143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.277168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.277369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.277394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.277545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.277570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 00:33:50.201 [2024-07-23 06:29:43.277741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.201 [2024-07-23 06:29:43.277768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.201 qpair failed and we were unable to recover it. 
00:33:50.201 [2024-07-23 06:29:43.277970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.201 [2024-07-23 06:29:43.277995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420
00:33:50.201 qpair failed and we were unable to recover it.
00:33:50.201-00:33:50.204 [2024-07-23 06:29:43.278164 - 06:29:43.296927] the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry, roughly 96 further occurrences differing only in timestamps.
00:33:50.204 EAL: No free 2048 kB hugepages reported on node 1
00:33:50.204 [2024-07-23 06:29:43.297074 - 06:29:43.299377] the same three-line connect()/qpair failure repeats, 13 further occurrences.
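The EAL notice above reports that NUMA node 1 has no free 2048 kB hugepages, which DPDK (and therefore SPDK) relies on for its memory pools. As a minimal illustrative sketch, assuming a two-node host and the standard Linux sysfs layout (this probe is not part of the test suite), the per-node counter that the notice refers to can be read like this:

/* hugepage_check.c - print free 2048 kB hugepages per NUMA node.
 * Illustrative sketch only; the two-node loop is an assumption about
 * this host, the sysfs path is the standard Linux location. */
#include <stdio.h>

int main(void)
{
    for (int node = 0; node < 2; node++) {
        char path[128];
        unsigned long free_pages;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/"
                 "hugepages-2048kB/free_hugepages", node);

        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);
            continue;
        }
        if (fscanf(f, "%lu", &free_pages) == 1)
            printf("node%d: %lu free 2048 kB hugepages\n", node, free_pages);
        fclose(f);
    }
    return 0;
}

A zero count for node 1 here would match the EAL message; reserving pages before starting the target (for example via the vm.nr_hugepages sysctl) is the usual remedy.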
00:33:50.205 [2024-07-23 06:29:43.299551 - 06:29:43.300773] the same three-line connect()/qpair failure repeats, 7 further occurrences.
00:33:50.205 [2024-07-23 06:29:43.300846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:50.205 [2024-07-23 06:29:43.300948 - 06:29:43.301199] the same three-line connect()/qpair failure repeats, 2 further occurrences.
00:33:50.205-00:33:50.207 [2024-07-23 06:29:43.301352 - 06:29:43.318748] the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every remaining retry, roughly 90 further occurrences differing only in timestamps.
00:33:50.207 [2024-07-23 06:29:43.318933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.318959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.319134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.319160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.319332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.319361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.319543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.319568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.319754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.319779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.319929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.319954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.320104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.320129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.320332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.320358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.320508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.320532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.320677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.320703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 
00:33:50.207 [2024-07-23 06:29:43.320858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.320882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.321054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.321077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.321261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.207 [2024-07-23 06:29:43.321287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.207 qpair failed and we were unable to recover it. 00:33:50.207 [2024-07-23 06:29:43.321432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.321456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.321638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.321662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.321836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.321862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.322014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.322040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.322233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.322258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.322410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.322435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.322617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.322643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 
00:33:50.208 [2024-07-23 06:29:43.322784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.322808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.322965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.322990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.323137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.323162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.323332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.323356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.323551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.323576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.323784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.323810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.323977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.324003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.324174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.324200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.324371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.324395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.324540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.324564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 
00:33:50.208 [2024-07-23 06:29:43.324738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.324765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.324934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.324959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.325121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.325146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.325299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.325324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.325498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.325523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.325701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.325726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.325895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.325919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.326074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.326098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.326243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.326269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.326468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.326493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 
00:33:50.208 [2024-07-23 06:29:43.326699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.326724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.326864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.326888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.327074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.327103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.327300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.327326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.327503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.327528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.208 qpair failed and we were unable to recover it. 00:33:50.208 [2024-07-23 06:29:43.327690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.208 [2024-07-23 06:29:43.327715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.327896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.327922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.328117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.328142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.328294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.328320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.328494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.328519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 
00:33:50.209 [2024-07-23 06:29:43.328687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.328713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.328887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.328912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.329057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.329084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.329236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.329261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.329404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.329429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.329588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.329619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.329782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.329808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.329978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.330002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.330175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.330200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.330348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.330374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 
00:33:50.209 [2024-07-23 06:29:43.330547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.330573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.330759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.330784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 [2024-07-23 06:29:43.330788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.330959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.330984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.331187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.331211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.331416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.331441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.331591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.331625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.331784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.331810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.331958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.331982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.332163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.332189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 
00:33:50.209 [2024-07-23 06:29:43.332339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.332365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.332538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.332563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.332747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.332772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.332944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.332968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.333116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.333142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.333352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.333378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.333524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.333549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.333702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.333728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.333881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.333909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.334086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.334112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 
00:33:50.209 [2024-07-23 06:29:43.334314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.334340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.334483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.334510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.334692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.334731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.334886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.209 [2024-07-23 06:29:43.334912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.209 qpair failed and we were unable to recover it. 00:33:50.209 [2024-07-23 06:29:43.335088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.335113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.335284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.335310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.335481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.335507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.335668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.335695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.335874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.335899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.336048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.336074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 
00:33:50.210 [2024-07-23 06:29:43.336285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.336310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.336456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.336483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.336663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.336689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.336864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.336890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.337061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.337087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.337264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.337289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.337467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.337497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.337674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.337700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.337843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.337868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.338041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.338067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 
00:33:50.210 [2024-07-23 06:29:43.338207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.338232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.338411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.338437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.338590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.338620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.338794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.338819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.339016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.339042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.339210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.339235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.339404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.339429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.339578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.339605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.339778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.339804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.339951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.339977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 
00:33:50.210 [2024-07-23 06:29:43.340157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.340183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.340355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.340380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.340531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.340557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.340735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.340762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.340965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.340990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.341171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.341196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.341372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.341398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.341557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.341582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.341729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.341754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.341929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.341954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 
00:33:50.210 [2024-07-23 06:29:43.342122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.342148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.342320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.210 [2024-07-23 06:29:43.342346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.210 qpair failed and we were unable to recover it. 00:33:50.210 [2024-07-23 06:29:43.342541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.342567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.342783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.342810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.342961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.342987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.343162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.343187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.343337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.343362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.343546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.343572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.343740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.343766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.343945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.343971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 
00:33:50.211 [2024-07-23 06:29:43.344143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.344169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.344324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.344350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.344531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.344557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.344737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.344765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.344912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.344937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.345115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.345141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.345316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.345349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.345528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.345555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.345698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.345724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.345897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.345922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 
00:33:50.211 [2024-07-23 06:29:43.346107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.346131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.346284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.346308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.346467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.346492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.346691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.346717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.346870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.346895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.347070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.347095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.347275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.347300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.347487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.347512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.347690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.347716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.347891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.347915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 
00:33:50.211 [2024-07-23 06:29:43.348057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.348081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.348287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.348312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.348462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.348489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.348648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.348673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.348819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.348845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.349043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.349068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.349209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.349234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.349433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.349457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.349632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.349659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 00:33:50.211 [2024-07-23 06:29:43.349834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.211 [2024-07-23 06:29:43.349859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.211 qpair failed and we were unable to recover it. 
00:33:50.212 [2024-07-23 06:29:43.350031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.350056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.350228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.350253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.350404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.350430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.350637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.350664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.350815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.350840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.351017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.351041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.351239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.351265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.351442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.351467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.351662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.351689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.351887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.351912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 
00:33:50.212 [2024-07-23 06:29:43.352062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.352086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.352263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.352288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.352464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.352489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.352630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.352656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.352809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.352834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.352981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.353007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.353178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.353207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.353358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.353384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.353553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.353578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.353758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.353783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 
00:33:50.212 [2024-07-23 06:29:43.353932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.353958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.354171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.354197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.354370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.354395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.354546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.354571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.354724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.354749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.354927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.354952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.355099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.355124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.355320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.355345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.355546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.355572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.355730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.355756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 
00:33:50.212 [2024-07-23 06:29:43.355964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.355990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.356201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.356226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.356401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.356427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.212 qpair failed and we were unable to recover it. 00:33:50.212 [2024-07-23 06:29:43.356595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.212 [2024-07-23 06:29:43.356624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.356802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.356826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.357010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.357035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.357209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.357234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.357407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.357432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.357607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.357637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.357780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.357804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 
00:33:50.213 [2024-07-23 06:29:43.357953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.357978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.358162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.358187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.358364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.358389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.358578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.358604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.358761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.358785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.358934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.358960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.359141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.359166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.359364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.359389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.359567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.359593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.359755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.359783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 
00:33:50.213 [2024-07-23 06:29:43.359935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.359960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.360113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.360139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.360285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.360310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.360459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.360485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.360661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.360687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.360863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.360887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.361063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.361092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.361271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.361296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.361433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.361457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.361611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.361644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 
00:33:50.213 [2024-07-23 06:29:43.361817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.361842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.362004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.362030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.362176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.362201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.362383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.362409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.362553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.362578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.362747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.362772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.362918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.362944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.363127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.363153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.363327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.363352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 00:33:50.213 [2024-07-23 06:29:43.363495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.213 [2024-07-23 06:29:43.363521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.213 qpair failed and we were unable to recover it. 
00:33:50.213 [2024-07-23 06:29:43.363682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.363709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.363881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.363907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.364125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.364151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.364298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.364322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.364463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.364488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.364639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.364665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.364859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.364884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.365081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.365105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.365264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.365288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.365459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.365485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 
00:33:50.214 [2024-07-23 06:29:43.365661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.365687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.365855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.365879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.366057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.366081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.366281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.366306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.366511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.366536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.366680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.366707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.366882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.366919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.367063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.367088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.367296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.367321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.367466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.367491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 
00:33:50.214 [2024-07-23 06:29:43.367634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.367660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.367860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.367886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.368049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.368075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.368250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.368276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.368428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.368453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.368611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.368644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.368873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.368903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.369101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.369128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.369307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.369334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.369483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.369508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 
00:33:50.214 [2024-07-23 06:29:43.369704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.369730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.369875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.369900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.370082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.370107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.370305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.370329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.370476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.370501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.370669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.370696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.370843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.370868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.371068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.214 [2024-07-23 06:29:43.371093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.214 qpair failed and we were unable to recover it. 00:33:50.214 [2024-07-23 06:29:43.371271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.371296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.371477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.371502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 
00:33:50.215 [2024-07-23 06:29:43.371684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.371709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.371881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.371909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.372122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.372148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.372306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.372331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.372479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.372504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.372655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.372680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.372827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.372852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.373020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.373046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.373195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.373220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.373370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.373395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 
00:33:50.215 [2024-07-23 06:29:43.373560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.373584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.373738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.373768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.373975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.374001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.374213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.374254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.374438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.374465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.374628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.374655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.374874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.374900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.375064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.375089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.375267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.375293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.375470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.375496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 
00:33:50.215 [2024-07-23 06:29:43.375682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.375709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.375881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.375906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.376089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.376116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.376271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.376297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.376442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.376468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.376637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.376664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.376832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.376857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.377025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.377052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.377203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.377229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.377380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.377406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 
00:33:50.215 [2024-07-23 06:29:43.377579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.377610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.377772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.377797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.377979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.378005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.378208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.378234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.378408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.378434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.215 qpair failed and we were unable to recover it. 00:33:50.215 [2024-07-23 06:29:43.378574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.215 [2024-07-23 06:29:43.378604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.378781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.378807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.379005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.379035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.379176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.379202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.379368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.379394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 
00:33:50.216 [2024-07-23 06:29:43.379549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.379579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.379795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.379821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.380029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.380055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.380230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.380257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.380438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.380465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.380649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.380675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.380823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.380848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.381031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.381057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.381235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.381261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.381407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.381432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 
00:33:50.216 [2024-07-23 06:29:43.381578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.381608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.381790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.381817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.381962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.381988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.382134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.382159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.382338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.382364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.382510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.382536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.382716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.382744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.382990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.383016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.383228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.383254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.383418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.383445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 
00:33:50.216 [2024-07-23 06:29:43.383627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.383653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.383833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.383858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.384076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.384101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.384255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.384284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.384466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.384494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.384646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.384673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.384820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.384845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.385004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.385035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.385190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.385216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.385370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.385395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 
00:33:50.216 [2024-07-23 06:29:43.385626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.385668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.385859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.385886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.386044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.386070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.216 [2024-07-23 06:29:43.386249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.216 [2024-07-23 06:29:43.386275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.216 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.386454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.386479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.386655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.386682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.386860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.386887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.387045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.387071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.387226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.387252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.387429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.387456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 
00:33:50.217 [2024-07-23 06:29:43.387640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.387667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.387852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.387878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.388054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.388080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.388228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.388254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.388404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.388437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.388620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.388647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.388844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.388871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.389053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.389079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.389285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.389311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.389485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.389511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 
00:33:50.217 [2024-07-23 06:29:43.389691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.389719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.389873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.389899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.390059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.390085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.390285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.390310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.390465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.390492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.390644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.390675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.390830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.390857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.391040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.391068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.391244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.391270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.391416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.391445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 
00:33:50.217 [2024-07-23 06:29:43.391650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.391677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.391828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.391854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.392043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.392069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.392265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.392290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.392494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.392520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.392704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.217 [2024-07-23 06:29:43.392730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.217 qpair failed and we were unable to recover it. 00:33:50.217 [2024-07-23 06:29:43.392884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.392918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.393063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.393092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.393241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.393267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.393445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.393470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 
00:33:50.218 [2024-07-23 06:29:43.393650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.393676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.393819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.393845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.394002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.394028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.394195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.394220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.394374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.394400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.394575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.394600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.394778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.394803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.394957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.394982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.395154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.395180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.395355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.395380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 
00:33:50.218 [2024-07-23 06:29:43.395598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.395638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.395815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.395841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.396038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.396064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.396208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.396234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.396408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.396434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.396584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.396610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.396777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.396804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.396995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.397021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.397195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.397220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.397371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.397396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 
00:33:50.218 [2024-07-23 06:29:43.397538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.397564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.397780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.397807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.397952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.397980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.398137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.398162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.398335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.398362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.398526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.398552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.398735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.398761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.398924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.398950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.399124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.399149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.399310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.399336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 
00:33:50.218 [2024-07-23 06:29:43.399484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.399509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.399687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.399713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.399854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.399880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.400054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.218 [2024-07-23 06:29:43.400080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.218 qpair failed and we were unable to recover it. 00:33:50.218 [2024-07-23 06:29:43.400251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.400276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.400462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.400488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.400660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.400687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.400865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.400895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.401080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.401106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.401280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.401306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 
00:33:50.219 [2024-07-23 06:29:43.401484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.401510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.401697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.401723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.401923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.401948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.402106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.402131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.402309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.402335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.402537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.402562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.402724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.402750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.402924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.402950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.403120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.403146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.403316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.403342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 
00:33:50.219 [2024-07-23 06:29:43.404173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.404202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.404401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.404436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.404627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.404653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.404833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.404864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.405052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.405078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.405801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.405832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.406022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.406049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.406898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.406939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.407103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.407129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.407838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.407867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 
00:33:50.219 [2024-07-23 06:29:43.408049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.408076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.408239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.408265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.408439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.408465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.408649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.408676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.408854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.408896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.409094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.409121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.409302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.409328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.409509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.409536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.409698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.409725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.409877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.409903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 
00:33:50.219 [2024-07-23 06:29:43.410059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.410085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.410262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.219 [2024-07-23 06:29:43.410289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.219 qpair failed and we were unable to recover it. 00:33:50.219 [2024-07-23 06:29:43.410468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.410493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.410670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.410697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.410899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.410932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.411113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.411138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.411282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.411307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.411491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.411516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.411698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.411724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.411907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.411932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 
00:33:50.220 [2024-07-23 06:29:43.412131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.412156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.412345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.412371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.412543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.412568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.412780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.412806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.412949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.412973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.413125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.413151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.413298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.413325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.413509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.413535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.413703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.413729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.413880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.413905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 
00:33:50.220 [2024-07-23 06:29:43.414082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.414109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.414287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.414318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.414498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.414523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.414745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.414771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.414921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.414946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.415117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.415143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.415287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.415312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.415496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.415521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.415674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.415700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.415873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.415899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 
00:33:50.220 [2024-07-23 06:29:43.416048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.416074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.416278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.416303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.416478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.416513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.416676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.416702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.416875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.416900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.417049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.417075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.417281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.417307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.417492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.417518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.417695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.417722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 00:33:50.220 [2024-07-23 06:29:43.417895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.220 [2024-07-23 06:29:43.417921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.220 qpair failed and we were unable to recover it. 
00:33:50.221 [2024-07-23 06:29:43.418123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.418149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.418306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.418333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.418511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.418537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.418687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.418713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.418870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.418896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.419037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.419071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.419244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.419271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.419445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.419471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.419736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.419766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.419944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.419970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 
00:33:50.221 [2024-07-23 06:29:43.420121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.420146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.420292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.420318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.420491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.420517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.420722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.420748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.420929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.420954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.421128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.421153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.421339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.421365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.421534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.421560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.421732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.421758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.421942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.421967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 
00:33:50.221 [2024-07-23 06:29:43.422147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.422172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.422431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.422456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.422610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.422652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.422828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.422854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.423012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.423038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.423188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.423213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.423353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.423379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.423537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.423578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.423770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.221 [2024-07-23 06:29:43.423810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.221 qpair failed and we were unable to recover it. 00:33:50.221 [2024-07-23 06:29:43.423904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.221 [2024-07-23 06:29:43.423948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:50.221 [2024-07-23 06:29:43.423964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:50.221 [2024-07-23 06:29:43.423977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:50.221 [2024-07-23 06:29:43.423989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:50.221 [2024-07-23 06:29:43.423963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.221 [2024-07-23 06:29:43.423990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:33:50.221 qpair failed and we were unable to recover it.
00:33:50.221 [2024-07-23 06:29:43.424046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:50.221 [2024-07-23 06:29:43.424099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:50.221 [2024-07-23 06:29:43.424124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:50.221 [2024-07-23 06:29:43.424127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:50.221 [2024-07-23 06:29:43.424160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.221 [2024-07-23 06:29:43.424185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:33:50.221 qpair failed and we were unable to recover it.
00:33:50.221 [2024-07-23 06:29:43.424354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.221 [2024-07-23 06:29:43.424380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:33:50.221 qpair failed and we were unable to recover it.
00:33:50.221 [2024-07-23 06:29:43.424563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.221 [2024-07-23 06:29:43.424594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:33:50.221 qpair failed and we were unable to recover it.
00:33:50.221 [2024-07-23 06:29:43.424760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.221 [2024-07-23 06:29:43.424786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:33:50.221 qpair failed and we were unable to recover it.
00:33:50.221 [2024-07-23 06:29:43.424937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.221 [2024-07-23 06:29:43.424963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:33:50.221 qpair failed and we were unable to recover it.
00:33:50.222 [2024-07-23 06:29:43.425202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.222 [2024-07-23 06:29:43.425228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:33:50.222 qpair failed and we were unable to recover it.
00:33:50.222 [2024-07-23 06:29:43.425393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.425419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.425595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.425637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.425801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.425827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.426002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.426028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.426178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.426205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.426357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.426383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.426539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.426565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.426748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.426774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.426934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.426961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.427114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.427140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 
00:33:50.222 [2024-07-23 06:29:43.427305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.427333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.427620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.427648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.427822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.427848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.428032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.428060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.428248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.428274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.428452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.428479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.428642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.428668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.428812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.428839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.429001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.429027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.429227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.429253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 
00:33:50.222 [2024-07-23 06:29:43.429399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.429426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.429620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.429646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.429823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.429849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.430022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.430048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.430222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.430248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.430394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.430420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.430585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.430632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.430805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.430834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.431005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.431031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 00:33:50.222 [2024-07-23 06:29:43.431179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.222 [2024-07-23 06:29:43.431204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.222 qpair failed and we were unable to recover it. 
00:33:50.222 [2024-07-23 06:29:43.431459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.431484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.431646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.431672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.431819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.431846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.432021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.432046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.432193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.432219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.432376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.432401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.432554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.432584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.432736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.432763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.432902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.432930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.433091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.433117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 
00:33:50.223 [2024-07-23 06:29:43.433260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.433296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.433447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.433472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.433633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.433659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.433826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.433852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.434010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.434035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.434211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.434236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.434398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.434424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.434576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.434602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.434784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.434811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.434974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.435000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 
00:33:50.223 [2024-07-23 06:29:43.435169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.435195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.435335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.435361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.435508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.435534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.435695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.435720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.435872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.435897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.436097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.436126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.436272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.436297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.436450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.436476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.436636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.436661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.436811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.436836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 
00:33:50.223 [2024-07-23 06:29:43.436980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.437007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.437151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.437177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.437348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.437374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.437548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.437574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.437780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.437807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.437956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.437987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.438149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.438175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.438320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.438345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.438488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.438514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.223 [2024-07-23 06:29:43.438686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.438731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 
00:33:50.223 [2024-07-23 06:29:43.438888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.223 [2024-07-23 06:29:43.438922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.223 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.439114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.439140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.439281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.439307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.439475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.439501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.439655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.439681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.439832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.439858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.440012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.440038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.440181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.440206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.440497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.440522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.440710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.440736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 
00:33:50.224 [2024-07-23 06:29:43.440885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.440913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.441111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.441137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.441297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.441323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.441472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.441497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.441692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.441719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.441866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.441892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.442143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.442177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.442351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.442376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.442525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.442551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.442736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.442763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 
00:33:50.224 [2024-07-23 06:29:43.442923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.442948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.443098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.443123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.443270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.443295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.443484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.443510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.443662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.443688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.443841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.443868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.444108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.444134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.444382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.444407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.444590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.444627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.444776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.444802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 
00:33:50.224 [2024-07-23 06:29:43.444969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.444995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.445167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.445193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.445335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.445360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.445513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.445539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.445727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.445757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.445947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.445972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.446165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.446191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.446336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.446374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.446525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.446551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.446707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.446733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 
00:33:50.224 [2024-07-23 06:29:43.446876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.446909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.224 [2024-07-23 06:29:43.447071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.224 [2024-07-23 06:29:43.447099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.224 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.447248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.447274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.447427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.447453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.447622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.447648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.447820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.447845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.448006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.448031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.448185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.448211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.448394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.448420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.448578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.448608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 
00:33:50.225 [2024-07-23 06:29:43.448767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.448792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.448961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.448986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.449237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.449262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.449464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.449489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.449648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.449673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.449864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.449889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.450042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.450068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.450214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.450243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.450399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.450424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.450563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.450588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 
00:33:50.225 [2024-07-23 06:29:43.450765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.450791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.450933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.450962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.451134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.451159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.451313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.451339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.451538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.451563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.451726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.451752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.451925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.451950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.452150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.452175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.452332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.452357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.452533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.452558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 
00:33:50.225 [2024-07-23 06:29:43.452727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.452753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.453003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.453028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.453190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.453216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.453391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.453416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.453694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.453720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.453885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.453910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.454070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.454103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.454251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.454284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.454447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.454472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.454656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.454683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 
00:33:50.225 [2024-07-23 06:29:43.454836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.454862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.455007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.455034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.455188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.225 [2024-07-23 06:29:43.455213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.225 qpair failed and we were unable to recover it. 00:33:50.225 [2024-07-23 06:29:43.455368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.455392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.455554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.455579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.455770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.455796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.455975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.456009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.456159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.456184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.456436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.456470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.456680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.456706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 
00:33:50.226 [2024-07-23 06:29:43.456870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.456895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.457050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.457081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.457253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.457279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.457420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.457446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.457629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.457655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.457797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.457822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.457977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.458002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.458161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.458185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.458340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.458367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.458623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.458649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 
00:33:50.226 [2024-07-23 06:29:43.458795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.458821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.458970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.458995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.459168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.459194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.459372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.459398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.459554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.459580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.459756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.459782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.459917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.459943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.460109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.460134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.460311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.460336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.460512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.460538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 
00:33:50.226 [2024-07-23 06:29:43.460727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.460753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.460898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.460923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.461093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.461118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.461290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.461315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.461498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.461524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.461682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.461708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.461873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.461898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.462043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.462068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.462246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.462272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.226 qpair failed and we were unable to recover it. 00:33:50.226 [2024-07-23 06:29:43.462433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.226 [2024-07-23 06:29:43.462458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 
00:33:50.227 [2024-07-23 06:29:43.462606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.462638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.462819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.462845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.462994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.463020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.463176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.463201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.463375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.463400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.463588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.463633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.463785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.463812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.463957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.463982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.464134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.464160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.464305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.464330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 
00:33:50.227 [2024-07-23 06:29:43.464498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.464524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.464704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.464730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.464879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.464904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.465058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.465083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.465237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.465262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.465414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.465438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.465610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.465642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.465792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.465818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.465963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.465988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.466139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.466164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 
00:33:50.227 [2024-07-23 06:29:43.466353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.466378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.466554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.466579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.466722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.466747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.466892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.466917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.467066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.467091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.467243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.467273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.467455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.467481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.467649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.467675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.467848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.467874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.468023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.468048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 
00:33:50.227 [2024-07-23 06:29:43.468217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.468242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.468422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.468448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.468586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.468612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.468771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.468797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.468974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.469000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.469149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.469174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.469339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.469368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.469514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.469540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.469722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.469747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.469888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.469913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 
00:33:50.227 [2024-07-23 06:29:43.470081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.227 [2024-07-23 06:29:43.470107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.227 qpair failed and we were unable to recover it. 00:33:50.227 [2024-07-23 06:29:43.470389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.470415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.470573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.470598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.470773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.470798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.470948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.470973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.471171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.471196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.471338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.471363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.471526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.471551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.471711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.471737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.471891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.471916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 
00:33:50.228 [2024-07-23 06:29:43.472102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.472127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.472302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.472329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.472507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.472536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.472731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.472757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.472904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.472929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.473110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.473135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.473290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.473316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.473497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.473529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.473683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.473710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.473867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.473893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 
00:33:50.228 [2024-07-23 06:29:43.474040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.474067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.474216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.474242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.474397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.474423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.474589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.474624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.474780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.474805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.474957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.474984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.475130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.475155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.475316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.475341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.475521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.475546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.475702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.475729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 
00:33:50.228 [2024-07-23 06:29:43.475898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.475923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.476072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.476097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.476274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.476299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.476483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.476509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.476713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.476739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.476890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.476918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.477112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.477137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.477299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.477324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.477482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.477507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.477658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.477690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 
00:33:50.228 [2024-07-23 06:29:43.477877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.477902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.478090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.478115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.228 qpair failed and we were unable to recover it. 00:33:50.228 [2024-07-23 06:29:43.478285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.228 [2024-07-23 06:29:43.478310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.478460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.478485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.478657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.478682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.478829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.478854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.479022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.479048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.479234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.479259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.479414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.479440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.479635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.479679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 
00:33:50.229 [2024-07-23 06:29:43.479843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.479877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.480029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.480055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.480229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.480255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.480424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.480451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.480623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.480650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.480795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.480821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.480967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.480993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.481142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.481168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.481343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.481368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.481545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.481572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 
00:33:50.229 [2024-07-23 06:29:43.481770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.481797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.481988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.482014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.482155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.482182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.482351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.482377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.482558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.482585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.482750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.482776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.482951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.482977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.483133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.483159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.483300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.483326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.483476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.483502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 
00:33:50.229 [2024-07-23 06:29:43.483652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.483689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.483855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.483881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.484061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.484086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.484231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.484257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.484404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.484431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.484573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.484600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.484780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.484819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.485004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.485035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.485228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.485254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.485424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.485449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 
00:33:50.229 [2024-07-23 06:29:43.485596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.485646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.485822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.485847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.485984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.486009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.486158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.486183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.486357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.486381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.229 [2024-07-23 06:29:43.486557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.229 [2024-07-23 06:29:43.486582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.229 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.486768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.486793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.486932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.486958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.487110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.487134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.487270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.487294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 
00:33:50.230 [2024-07-23 06:29:43.487439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.487465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.487626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.487652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.487838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.487863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.488023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.488048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.488209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.488234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.488387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.488412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.488563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.488588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.488742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.488767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.488907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.488932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.489077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.489103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 
00:33:50.230 [2024-07-23 06:29:43.489243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.489268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.489414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.489438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.489585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.489609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.489762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.489788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.489969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.489998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.490154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.490179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.490356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.490381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.490526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.490552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.490713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.490739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.490912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.490937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 
00:33:50.230 [2024-07-23 06:29:43.491083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.491108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.491249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.491274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.491419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.491444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.491629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.491656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.491814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.230 [2024-07-23 06:29:43.491840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.230 qpair failed and we were unable to recover it. 00:33:50.230 [2024-07-23 06:29:43.492012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.492038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.492227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.492252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.492403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.492436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.492617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.492643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.492798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.492823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 
00:33:50.231 [2024-07-23 06:29:43.492959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.492983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.493160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.493185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.493333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.493358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.493506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.493531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.493718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.493745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.493916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.493941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.494118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.494143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.494286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.494311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.494483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.494508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.494667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.494697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 
00:33:50.231 [2024-07-23 06:29:43.494872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.494897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.495043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.495072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.495270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.495295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.495450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.495476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.495644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.495678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.495834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.495860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.496023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.496050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.496269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.496295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.496471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.496496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.496649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.496677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 
00:33:50.231 [2024-07-23 06:29:43.496849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.496875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.497068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.497093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.497233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.497259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.497398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.497424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.497566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.497591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.497788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.497830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.498132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.498160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.498309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.498336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.498515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.498542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.498723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.498750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 
00:33:50.231 [2024-07-23 06:29:43.498906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.498933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.499096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.499122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.499296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.231 [2024-07-23 06:29:43.499323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.231 qpair failed and we were unable to recover it. 00:33:50.231 [2024-07-23 06:29:43.499461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.499486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.499649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.499680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.499907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.499934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.500106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.500132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.500279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.500304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.500490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.500521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.500684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.500711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 
00:33:50.232 [2024-07-23 06:29:43.500867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.500893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.501043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.501069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.501256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.501282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.501439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.501465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.501646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.501681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.501852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.501878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.502023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.502049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.502215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.502241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.502517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.502543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.502722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.502748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 
00:33:50.232 [2024-07-23 06:29:43.502953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.502979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.503155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.503180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.503344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.503370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.503640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.503668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.503820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.503846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.504002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.504028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.504174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.504199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.504356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.504382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.504560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.504586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.504796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.504822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 
00:33:50.232 [2024-07-23 06:29:43.504995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.505021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.505172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.505198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.505401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.505427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.505578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.505605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.505776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.505802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.506007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.506048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.506232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.506259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.506410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.506436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.506590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.506622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.506791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.506816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 
00:33:50.232 [2024-07-23 06:29:43.506994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.232 [2024-07-23 06:29:43.507019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.232 qpair failed and we were unable to recover it. 00:33:50.232 [2024-07-23 06:29:43.507162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.507187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.507334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.507361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.507527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.507552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.507749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.507790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.507975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.508003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.508209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.508235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.508384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.508410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.508551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.508582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.508751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.508779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 
00:33:50.233 [2024-07-23 06:29:43.508928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.508954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.509161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.509187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.509331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.509357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.509519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.509545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.509710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.509738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.509888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.509915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.510089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.510115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.510252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.510278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.510453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.510479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.510624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.510650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 
00:33:50.233 [2024-07-23 06:29:43.510795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.510822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.510969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.510996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.511159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.511186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.511361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.511387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.511590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.511622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.511806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.511833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.511973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.511999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.512172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.512198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.512377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.512406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.512560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.512586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 
00:33:50.233 [2024-07-23 06:29:43.512742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.512768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.512943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.512970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.513116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.513142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.513303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.513329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.513473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.513498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.513703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.513729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.513896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.513922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.514091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.514116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.233 [2024-07-23 06:29:43.514292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.233 [2024-07-23 06:29:43.514317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.233 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.514508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.514533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 
00:33:50.234 [2024-07-23 06:29:43.514717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.514743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.514909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.514934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.515107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.515133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.515272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.515297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.515454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.515479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.515624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.515649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.515793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.515820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.515990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.516015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.516157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.516187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.516345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.516370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 
00:33:50.234 [2024-07-23 06:29:43.516537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.516562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.516733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.516759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.516908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.516933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.517079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.517105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.517256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.517281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.517458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.517483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.517633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.517659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.517804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.517829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.517984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.518011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.518167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.518192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 
00:33:50.234 [2024-07-23 06:29:43.518391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.518416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.518569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.518595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.518757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.518783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.518925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.518950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.519115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.519141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.234 [2024-07-23 06:29:43.519277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.234 [2024-07-23 06:29:43.519302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.234 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-23 06:29:43.519462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-23 06:29:43.519488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.497 qpair failed and we were unable to recover it. 00:33:50.497 [2024-07-23 06:29:43.519636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.497 [2024-07-23 06:29:43.519673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.519851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.519877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.520063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.520089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 
00:33:50.498 [2024-07-23 06:29:43.520239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.520264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.520406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.520431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.520572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.520597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.520759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.520784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.520929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.520954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.521106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.521133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.521269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.521295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.521458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.521484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.521632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.521670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.521831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.521857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 
00:33:50.498 [2024-07-23 06:29:43.522006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.522031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.522175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.522201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.522386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.522412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.522575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.522600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.522782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.522808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.522972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.522997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.523150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.523175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.523348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.523373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.523531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.523560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.523747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.523772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 
00:33:50.498 [2024-07-23 06:29:43.523947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.523972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.524114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.524139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.524339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.524364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.524512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.524537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.524696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.524722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.524867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.524893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.525043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.525070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.525271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.525296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.525443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.525469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.525651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.525677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 
00:33:50.498 [2024-07-23 06:29:43.525834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.525860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.526041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.526067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.526240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.526266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.526436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.526461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.526608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.498 [2024-07-23 06:29:43.526639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.498 qpair failed and we were unable to recover it. 00:33:50.498 [2024-07-23 06:29:43.526816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.526841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.527055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.527080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.527252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.527277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.527418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.527444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.527624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.527651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 
00:33:50.499 [2024-07-23 06:29:43.527798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.527825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.527984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.528009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.528208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.528234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.528398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.528425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.528572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.528597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.528769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.528808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.529018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.529046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.529183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.529209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.529354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.529380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.529555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.529581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 
00:33:50.499 [2024-07-23 06:29:43.529733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.529759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.529895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.529921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.530070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.530095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.530246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.530272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.530482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.530510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.530692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.530718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.530860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.530885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.531060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.531085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.531224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.531255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.531403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.531429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 
00:33:50.499 [2024-07-23 06:29:43.531569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.531594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.531749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.531775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.531929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.531956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.532144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.532169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.532338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.532363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.532515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.532540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.532692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.532718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.532867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.532892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.533037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.533063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.533229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.533254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 
00:33:50.499 [2024-07-23 06:29:43.533429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.533456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.533603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.533634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.499 [2024-07-23 06:29:43.533781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.499 [2024-07-23 06:29:43.533807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.499 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.533951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.533977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.534119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.534145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.534318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.534343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.534513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.534538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.534707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.534733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.534910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.534935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.535083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.535108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-23 06:29:43.535284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.535310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.535480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.535506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.535659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.535686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.535892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.535918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.536069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.536095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.536274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.536314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.536481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.536509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.536689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.536716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.536866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.536892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.537092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.537118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-23 06:29:43.537267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.537293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.537450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.537477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.537627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.537653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.537824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.537849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.537997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.538022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.538170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.538196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.538363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.538389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.538575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.538603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.538762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.538793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.538956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.538982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-23 06:29:43.539149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.539175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.539329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.539355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.539505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.539531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.539682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.539710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.539876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.539901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.540100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.540126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.540270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.540296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.540437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.540463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.540627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.540653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 00:33:50.500 [2024-07-23 06:29:43.540810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.500 [2024-07-23 06:29:43.540835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.500 qpair failed and we were unable to recover it. 
00:33:50.500 [2024-07-23 06:29:43.541000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.541026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.541167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.541192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.541375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.541400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.541545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.541570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.541739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.541778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.541932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.541960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.542123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.542149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.542332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.542358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.542531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.542557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.542706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.542761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 
00:33:50.501 [2024-07-23 06:29:43.542913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.542940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.543120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.543147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.543313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.543339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.543518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.543546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.543707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.543746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.543916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.543956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.544120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.544146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.544326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.544352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.544509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.544535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.544675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.544702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 
00:33:50.501 [2024-07-23 06:29:43.544858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.544884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.545026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.545052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.545197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.545222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.545377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.545403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.545535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.545560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.545736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.545763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ea4b0 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.545930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.545970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.546148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.546176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.546358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.546384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 00:33:50.501 [2024-07-23 06:29:43.546546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.501 [2024-07-23 06:29:43.546572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:33:50.501 qpair failed and we were unable to recover it. 
00:33:50.504 [2024-07-23 06:29:43.563639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.563664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.563805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.563830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.563978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.564009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:50.504 [2024-07-23 06:29:43.564172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.564198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.564366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.564392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.564571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.564597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.564744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.564770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe58000b90 with addr=10.0.0.2, port=4420 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.564945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.564984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 
00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:50.504 [2024-07-23 06:29:43.565149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.565179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.504 [2024-07-23 06:29:43.565357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.565385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.565537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.565563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.565731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.565759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.565902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.565929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.566068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.566094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.566283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.566310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 [2024-07-23 06:29:43.566475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.566501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:33:50.504 qpair failed and we were unable to recover it. 00:33:50.504 A controller has encountered a failure and is being reset. 
00:33:50.504 [2024-07-23 06:29:43.566714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.504 [2024-07-23 06:29:43.566749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f8470 with addr=10.0.0.2, port=4420 00:33:50.504 [2024-07-23 06:29:43.566768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8470 is same with the state(5) to be set 00:33:50.504 [2024-07-23 06:29:43.566793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f8470 (9): Bad file descriptor 00:33:50.504 [2024-07-23 06:29:43.566811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.504 [2024-07-23 06:29:43.566825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.504 [2024-07-23 06:29:43.566840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.504 Unable to reset the controller. 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.504 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.505 Malloc0 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.505 [2024-07-23 06:29:43.619027] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.505 [2024-07-23 06:29:43.647274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.505 06:29:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1892807 00:33:51.445 Controller properly reset. 00:33:56.718 Initializing NVMe Controllers 00:33:56.718 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:56.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:56.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:56.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:56.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:56.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:56.718 Initialization complete. Launching workers. 
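For reference, the target bring-up that the rpc_cmd calls above perform can be reproduced by hand against a running nvmf_tgt. This is only a sketch mirroring the flags shown in the trace; it assumes the default RPC socket at /var/tmp/spdk.sock and is run from the SPDK repo root:
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420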
00:33:56.718 Starting thread on core 1 00:33:56.718 Starting thread on core 2 00:33:56.718 Starting thread on core 3 00:33:56.718 Starting thread on core 0 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:56.718 00:33:56.718 real 0m10.700s 00:33:56.718 user 0m32.608s 00:33:56.718 sys 0m7.751s 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:56.718 ************************************ 00:33:56.718 END TEST nvmf_target_disconnect_tc2 00:33:56.718 ************************************ 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:56.718 rmmod nvme_tcp 00:33:56.718 rmmod nvme_fabrics 00:33:56.718 rmmod nvme_keyring 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1893219 ']' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1893219 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1893219 ']' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1893219 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1893219 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1893219' 00:33:56.718 
killing process with pid 1893219 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1893219 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1893219 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.718 06:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.622 06:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:58.622 00:33:58.622 real 0m15.326s 00:33:58.622 user 0m57.836s 00:33:58.622 sys 0m10.226s 00:33:58.622 06:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:58.622 06:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:58.622 ************************************ 00:33:58.622 END TEST nvmf_target_disconnect 00:33:58.622 ************************************ 00:33:58.622 06:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:33:58.622 06:29:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:58.622 00:33:58.622 real 6m30.429s 00:33:58.622 user 16m58.186s 00:33:58.622 sys 1m27.794s 00:33:58.622 06:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:58.622 06:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.622 ************************************ 00:33:58.622 END TEST nvmf_host 00:33:58.622 ************************************ 00:33:58.622 06:29:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:58.622 00:33:58.622 real 27m8.500s 00:33:58.622 user 73m45.157s 00:33:58.622 sys 6m34.564s 00:33:58.622 06:29:51 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:58.622 06:29:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:58.622 ************************************ 00:33:58.622 END TEST nvmf_tcp 00:33:58.622 ************************************ 00:33:58.622 06:29:51 -- common/autotest_common.sh@1142 -- # return 0 00:33:58.622 06:29:51 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:33:58.622 06:29:51 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:58.622 06:29:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:58.622 06:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:58.622 06:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:58.622 ************************************ 00:33:58.622 START TEST spdkcli_nvmf_tcp 00:33:58.622 ************************************ 00:33:58.622 06:29:51 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:58.881 * Looking for test storage... 00:33:58.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:58.881 06:29:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:58.881 06:29:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:58.881 06:29:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:58.881 06:29:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.881 06:29:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.881 06:29:52 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1894411 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1894411 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1894411 ']' 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:58.882 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:58.882 [2024-07-23 06:29:52.056591] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:33:58.882 [2024-07-23 06:29:52.056696] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894411 ] 00:33:58.882 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.882 [2024-07-23 06:29:52.087199] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:58.882 [2024-07-23 06:29:52.114489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:58.882 [2024-07-23 06:29:52.203635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.882 [2024-07-23 06:29:52.203694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.141 06:29:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:59.141 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:59.141 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:59.141 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:59.141 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:59.141 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:59.141 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:59.141 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:59.141 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:59.141 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:59.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:59.141 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:59.141 ' 00:34:01.671 [2024-07-23 06:29:54.847269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.045 [2024-07-23 06:29:56.075574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:05.570 [2024-07-23 06:29:58.354814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:07.465 [2024-07-23 06:30:00.296923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:08.840 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:08.840 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:08.840 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:08.840 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:08.840 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:08.840 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:08.840 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:08.840 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:08.840 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:08.840 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:08.840 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:08.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:08.840 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:08.840 06:30:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:09.099 06:30:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:09.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:09.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:09.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:09.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:09.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:09.099 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:09.099 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:09.099 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:09.099 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:09.099 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:09.099 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:09.099 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:09.099 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:09.099 ' 00:34:14.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:14.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:14.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:14.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:14.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:14.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:14.365 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:14.365 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:14.365 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:14.365 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:14.365 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:14.365 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:14.365 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:14.365 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1894411 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1894411 ']' 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1894411 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1894411 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1894411' 00:34:14.365 killing process with pid 1894411 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1894411 00:34:14.365 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1894411 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1894411 ']' 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1894411 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1894411 ']' 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1894411 00:34:14.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1894411) - No such process 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1894411 is not found' 00:34:14.623 Process with pid 1894411 is not found 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:14.623 00:34:14.623 real 0m15.939s 00:34:14.623 user 0m33.686s 00:34:14.623 sys 0m0.844s 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.623 06:30:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.623 ************************************ 00:34:14.623 END TEST spdkcli_nvmf_tcp 00:34:14.623 ************************************ 00:34:14.623 06:30:07 -- common/autotest_common.sh@1142 -- # return 0 00:34:14.623 
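The spdkcli_nvmf_tcp run above amounts to driving spdkcli against a running nvmf_tgt: create malloc bdevs, a TCP transport, subsystems with namespaces, listeners and allowed hosts, verify the configuration tree against a match file, then delete everything in reverse order. A minimal sketch of that flow, assuming an SPDK checkout with build/bin/nvmf_tgt built and scripts/spdkcli.py invoked directly rather than through the spdkcli_job.py test helper (the spdkcli command strings themselves are the ones shown in the trace above):

# Start the target on two cores and give it a moment to open /var/tmp/spdk.sock.
./build/bin/nvmf_tgt -m 0x3 -p 0 &
sleep 2

# One backing bdev: 32 MiB, 512-byte blocks.
./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1

# TCP transport, then a subsystem with a namespace and a TCP listener.
./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

# Inspect the tree (this is what check_match diffs against spdkcli_nvmf.test.match),
# then tear down in reverse order, as the clear_nvmf_config stage above does.
./scripts/spdkcli.py ll /nvmf
./scripts/spdkcli.py /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1
./scripts/spdkcli.py /bdevs/malloc delete Malloc1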
06:30:07 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:14.623 06:30:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:14.623 06:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.623 06:30:07 -- common/autotest_common.sh@10 -- # set +x 00:34:14.623 ************************************ 00:34:14.623 START TEST nvmf_identify_passthru 00:34:14.623 ************************************ 00:34:14.623 06:30:07 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:14.882 * Looking for test storage... 00:34:14.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:14.882 06:30:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.882 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.883 06:30:07 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.883 06:30:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.883 06:30:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.883 06:30:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.883 06:30:08 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.883 06:30:08 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.883 06:30:08 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.883 06:30:08 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:14.883 06:30:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.883 06:30:08 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.883 06:30:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:14.883 06:30:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:14.883 06:30:08 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:14.883 06:30:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.786 06:30:09 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:16.786 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:16.787 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:16.787 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:16.787 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:16.787 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
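The nvmf_tcp_init steps traced next wire the two detected E810 ports into a self-contained target/initiator pair on one host: cvl_0_0 is moved into a new network namespace for the target side and given 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, the NVMe/TCP port is opened in iptables, and reachability is ping-checked in both directions. A condensed sketch of the same setup, using the interface names detected above and run as root:

# Target side lives in its own namespace; initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow the NVMe/TCP port in, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is later launched with the same "ip netns exec cvl_0_0_ns_spdk" prefix (NVMF_TARGET_NS_CMD), which is why its listener on 10.0.0.2 is only reachable through cvl_0_1.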
00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:16.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:34:16.787 00:34:16.787 --- 10.0.0.2 ping statistics --- 00:34:16.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.787 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:16.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:34:16.787 00:34:16.787 --- 10.0.0.1 ping statistics --- 00:34:16.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.787 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:16.787 06:30:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:16.787 06:30:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.787 06:30:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:16.787 06:30:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:16.787 06:30:10 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:16.787 06:30:10 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:34:16.787 06:30:10 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:34:16.787 06:30:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:16.787 06:30:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:16.787 06:30:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:16.787 06:30:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:16.787 06:30:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:16.787 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.003 
06:30:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:21.003 06:30:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:21.003 06:30:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:21.003 06:30:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:21.003 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.197 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:25.197 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.197 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.197 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1899536 00:34:25.197 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:25.197 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:25.197 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1899536 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1899536 ']' 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:25.197 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.197 [2024-07-23 06:30:18.514736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:34:25.197 [2024-07-23 06:30:18.514822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.481 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.481 [2024-07-23 06:30:18.559469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:25.482 [2024-07-23 06:30:18.586308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:25.482 [2024-07-23 06:30:18.675215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:25.482 [2024-07-23 06:30:18.675278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.482 [2024-07-23 06:30:18.675292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.482 [2024-07-23 06:30:18.675303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.482 [2024-07-23 06:30:18.675313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.482 [2024-07-23 06:30:18.675403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.482 [2024-07-23 06:30:18.675425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.482 [2024-07-23 06:30:18.675482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:25.482 [2024-07-23 06:30:18.675484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.482 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:25.482 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:25.482 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:25.482 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.482 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.482 INFO: Log level set to 20 00:34:25.482 INFO: Requests: 00:34:25.482 { 00:34:25.482 "jsonrpc": "2.0", 00:34:25.482 "method": "nvmf_set_config", 00:34:25.482 "id": 1, 00:34:25.482 "params": { 00:34:25.482 "admin_cmd_passthru": { 00:34:25.482 "identify_ctrlr": true 00:34:25.482 } 00:34:25.482 } 00:34:25.482 } 00:34:25.482 00:34:25.482 INFO: response: 00:34:25.482 { 00:34:25.482 "jsonrpc": "2.0", 00:34:25.482 "id": 1, 00:34:25.482 "result": true 00:34:25.482 } 00:34:25.482 00:34:25.482 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.482 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:25.482 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.482 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.482 INFO: Setting log level to 20 00:34:25.482 INFO: Setting log level to 20 00:34:25.482 INFO: Log level set to 20 00:34:25.482 INFO: Log level set to 20 00:34:25.482 INFO: Requests: 00:34:25.482 { 00:34:25.482 "jsonrpc": "2.0", 00:34:25.482 "method": "framework_start_init", 00:34:25.482 "id": 1 00:34:25.482 } 00:34:25.482 00:34:25.482 INFO: Requests: 00:34:25.482 { 00:34:25.482 "jsonrpc": "2.0", 00:34:25.482 "method": "framework_start_init", 00:34:25.482 "id": 1 00:34:25.482 } 00:34:25.482 00:34:25.742 [2024-07-23 06:30:18.842999] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:25.742 INFO: response: 00:34:25.742 { 00:34:25.742 "jsonrpc": "2.0", 00:34:25.742 "id": 1, 00:34:25.742 "result": true 00:34:25.742 } 00:34:25.742 00:34:25.742 INFO: response: 00:34:25.742 { 00:34:25.742 "jsonrpc": "2.0", 00:34:25.742 "id": 1, 00:34:25.742 "result": true 00:34:25.742 } 00:34:25.742 00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.742 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
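Because the target was started with --wait-for-rpc, the test can apply nvmf_set_config --passthru-identify-ctrlr before the framework initializes; the JSON-RPC request/response pairs above show exactly that, and the remaining rpc_cmd calls create the transport, attach the local NVMe device as bdev Nvme0, and export it through a single-namespace subsystem. Roughly the same sequence expressed with scripts/rpc.py (rpc_cmd in autotest_common.sh wraps that script; the BDF, serial, and addresses below are the ones from this run):

# Configure identify passthrough before framework init (target started with --wait-for-rpc).
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
./scripts/rpc.py framework_start_init

# TCP transport, local NVMe controller as bdev Nvme0, one single-namespace subsystem on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the passthru identify handler enabled, spdk_nvme_identify run against tcp/10.0.0.2:4420 reports the physical drive's data, which is why the serial (PHLJ916004901P0FGN) and model (INTEL) read over the fabric later in this log are expected to match the values read locally over PCIe.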
00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.742 INFO: Setting log level to 40 00:34:25.742 INFO: Setting log level to 40 00:34:25.742 INFO: Setting log level to 40 00:34:25.742 [2024-07-23 06:30:18.853153] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.742 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.742 06:30:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.742 06:30:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:29.021 Nvme0n1 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:29.021 [2024-07-23 06:30:21.747686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:29.021 [ 00:34:29.021 { 00:34:29.021 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:29.021 "subtype": "Discovery", 00:34:29.021 "listen_addresses": [], 00:34:29.021 "allow_any_host": true, 00:34:29.021 "hosts": [] 00:34:29.021 }, 00:34:29.021 { 00:34:29.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:29.021 "subtype": "NVMe", 00:34:29.021 "listen_addresses": [ 00:34:29.021 { 00:34:29.021 "trtype": "TCP", 00:34:29.021 "adrfam": "IPv4", 00:34:29.021 "traddr": "10.0.0.2", 00:34:29.021 
"trsvcid": "4420" 00:34:29.021 } 00:34:29.021 ], 00:34:29.021 "allow_any_host": true, 00:34:29.021 "hosts": [], 00:34:29.021 "serial_number": "SPDK00000000000001", 00:34:29.021 "model_number": "SPDK bdev Controller", 00:34:29.021 "max_namespaces": 1, 00:34:29.021 "min_cntlid": 1, 00:34:29.021 "max_cntlid": 65519, 00:34:29.021 "namespaces": [ 00:34:29.021 { 00:34:29.021 "nsid": 1, 00:34:29.021 "bdev_name": "Nvme0n1", 00:34:29.021 "name": "Nvme0n1", 00:34:29.021 "nguid": "9D6D3601CF644646B6A54EBBC2285866", 00:34:29.021 "uuid": "9d6d3601-cf64-4646-b6a5-4ebbc2285866" 00:34:29.021 } 00:34:29.021 ] 00:34:29.021 } 00:34:29.021 ] 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:29.021 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:29.021 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:29.021 06:30:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.021 06:30:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.021 06:30:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:29.021 06:30:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:29.021 rmmod nvme_tcp 00:34:29.021 rmmod nvme_fabrics 00:34:29.021 rmmod nvme_keyring 00:34:29.021 06:30:22 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1899536 ']' 00:34:29.021 06:30:22 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1899536 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1899536 ']' 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1899536 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1899536 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1899536' 00:34:29.021 killing process with pid 1899536 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1899536 00:34:29.021 06:30:22 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1899536 00:34:30.393 06:30:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:30.393 06:30:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.393 06:30:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.393 06:30:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.393 06:30:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.393 06:30:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.393 06:30:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:30.393 06:30:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.926 06:30:25 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:32.926 00:34:32.926 real 0m17.741s 00:34:32.926 user 0m26.118s 00:34:32.926 sys 0m2.217s 00:34:32.926 06:30:25 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:32.926 06:30:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:32.926 ************************************ 00:34:32.926 END TEST nvmf_identify_passthru 00:34:32.926 ************************************ 00:34:32.926 06:30:25 -- common/autotest_common.sh@1142 -- # return 0 00:34:32.926 06:30:25 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:32.926 06:30:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:32.926 06:30:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:32.926 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:34:32.926 ************************************ 00:34:32.926 START TEST nvmf_dif 00:34:32.926 ************************************ 00:34:32.926 06:30:25 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:32.926 * Looking for test 
storage... 00:34:32.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.926 06:30:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.926 06:30:25 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.926 06:30:25 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.926 06:30:25 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.926 06:30:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.926 06:30:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.926 06:30:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.926 06:30:25 nvmf_dif -- 
paths/export.sh@5 -- # export PATH 00:34:32.926 06:30:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:32.926 06:30:25 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:32.926 06:30:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:32.926 06:30:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:32.926 06:30:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:32.926 06:30:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:32.926 06:30:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.927 06:30:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:32.927 06:30:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:32.927 06:30:25 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:32.927 06:30:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:34.829 06:30:27 nvmf_dif 
-- nvmf/common.sh@298 -- # mlx=() 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:34.829 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:34.829 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:34.829 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:34.829 06:30:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:34.830 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:34.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:34.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:34:34.830 00:34:34.830 --- 10.0.0.2 ping statistics --- 00:34:34.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.830 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:34:34.830 00:34:34.830 --- 10.0.0.1 ping statistics --- 00:34:34.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.830 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:34.830 06:30:27 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:35.763 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:35.763 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:35.763 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:35.763 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:35.763 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:35.763 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:35.763 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:35.763 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:35.763 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:35.763 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:35.763 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:35.763 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:35.763 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:35.763 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:35.763 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:35.763 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:35.763 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:36.021 06:30:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:36.021 06:30:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:36.021 06:30:29 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:36.021 06:30:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1902790 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:36.021 06:30:29 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1902790 00:34:36.021 06:30:29 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1902790 ']' 00:34:36.021 06:30:29 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.021 06:30:29 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:36.021 06:30:29 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.021 06:30:29 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:36.022 06:30:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.022 [2024-07-23 06:30:29.185203] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:34:36.022 [2024-07-23 06:30:29.185301] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.022 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.022 [2024-07-23 06:30:29.222375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:36.022 [2024-07-23 06:30:29.249186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.022 [2024-07-23 06:30:29.332633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.022 [2024-07-23 06:30:29.332687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.022 [2024-07-23 06:30:29.332712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.022 [2024-07-23 06:30:29.332723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.022 [2024-07-23 06:30:29.332732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
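Note: nvmfappstart above launches build/bin/nvmf_tgt inside the target namespace with "-i 0" (shared-memory id) and "-e 0xFFFF" (tracepoint group mask), then waitforlisten blocks until the app answers on /var/tmp/spdk.sock before target/dif.sh enables the TCP transport with DIF insert/strip. The sketch below is a simplified equivalent, not the harness code itself: it polls the RPC socket with scripts/rpc.py instead of using the waitforlisten helper, and SPDK_DIR follows this workspace's layout.

    # Simplified equivalent of nvmfappstart + waitforlisten + target/dif.sh@50 (assumptions noted above).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll until the target answers on the default RPC socket (/var/tmp/spdk.sock).
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Enable the TCP transport with DIF insert/strip, as the dif.sh trace does next.
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o --dif-insert-or-strip
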
00:34:36.022 [2024-07-23 06:30:29.332757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:34:36.281 06:30:29 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.281 06:30:29 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.281 06:30:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:36.281 06:30:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.281 [2024-07-23 06:30:29.465712] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.281 06:30:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.281 06:30:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.281 ************************************ 00:34:36.281 START TEST fio_dif_1_default 00:34:36.281 ************************************ 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.281 bdev_null0 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.281 [2024-07-23 06:30:29.522005] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:36.281 { 00:34:36.281 "params": { 00:34:36.281 "name": "Nvme$subsystem", 00:34:36.281 "trtype": "$TEST_TRANSPORT", 00:34:36.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.281 "adrfam": "ipv4", 00:34:36.281 "trsvcid": "$NVMF_PORT", 00:34:36.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.281 "hdgst": ${hdgst:-false}, 00:34:36.281 "ddgst": ${ddgst:-false} 00:34:36.281 }, 00:34:36.281 "method": "bdev_nvme_attach_controller" 00:34:36.281 } 00:34:36.281 EOF 00:34:36.281 )") 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:36.281 "params": { 00:34:36.281 "name": "Nvme0", 00:34:36.281 "trtype": "tcp", 00:34:36.281 "traddr": "10.0.0.2", 00:34:36.281 "adrfam": "ipv4", 00:34:36.281 "trsvcid": "4420", 00:34:36.281 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.281 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.281 "hdgst": false, 00:34:36.281 "ddgst": false 00:34:36.281 }, 00:34:36.281 "method": "bdev_nvme_attach_controller" 00:34:36.281 }' 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:36.281 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:36.282 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:36.282 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.282 06:30:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.540 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:36.540 fio-3.35 00:34:36.540 Starting 1 thread 00:34:36.540 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.757 00:34:48.757 filename0: (groupid=0, jobs=1): err= 0: pid=1903021: Tue Jul 23 06:30:40 2024 00:34:48.757 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10025msec) 00:34:48.757 slat (nsec): min=4449, max=37404, avg=10355.25, stdev=3962.73 00:34:48.757 clat (usec): min=40910, max=47636, avg=41737.69, stdev=582.68 00:34:48.757 lat (usec): min=40919, max=47649, avg=41748.04, stdev=582.72 00:34:48.757 clat percentiles (usec): 00:34:48.757 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:48.757 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:48.757 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:48.757 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:34:48.757 | 99.99th=[47449] 00:34:48.757 bw ( KiB/s): min= 352, max= 384, per=99.73%, avg=382.40, stdev= 7.16, samples=20 00:34:48.757 iops : min= 88, max= 96, 
avg=95.60, stdev= 1.79, samples=20 00:34:48.757 lat (msec) : 50=100.00% 00:34:48.757 cpu : usr=89.33%, sys=10.42%, ctx=14, majf=0, minf=247 00:34:48.757 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.757 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.757 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:48.757 00:34:48.757 Run status group 0 (all jobs): 00:34:48.757 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10025-10025msec 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 00:34:48.757 real 0m11.195s 00:34:48.757 user 0m10.087s 00:34:48.757 sys 0m1.332s 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 ************************************ 00:34:48.757 END TEST fio_dif_1_default 00:34:48.757 ************************************ 00:34:48.757 06:30:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:48.757 06:30:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:48.757 06:30:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:48.757 06:30:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 ************************************ 00:34:48.757 START TEST fio_dif_1_multi_subsystems 00:34:48.757 ************************************ 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
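Note: each create_subsystem call in the trace (subsystem 0 for fio_dif_1_default above, subsystems 0 and 1 for fio_dif_1_multi_subsystems here) boils down to the same four RPCs. A condensed sketch follows, with the exact parameters from this log, using SPDK's scripts/rpc.py in place of the harness's rpc_cmd wrapper; the helper function name is illustrative only.

    # Per-subsystem setup traced by target/dif.sh: a 64 MiB null bdev with 512-byte blocks,
    # 16-byte metadata and DIF type 1, exported through its own NQN on the TCP listener.
    # (fio_dif_rand_params later repeats this with --dif-type 3.)
    create_dif_subsystem() {
        local id=$1
        rpc.py bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 1
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
            --serial-number "53313233-$id" --allow-any-host
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
            -t tcp -a 10.0.0.2 -s 4420
    }
    create_dif_subsystem 0    # fio_dif_1_default uses subsystem 0 only
    create_dif_subsystem 1    # fio_dif_1_multi_subsystems adds a second one

The gen_nvmf_target_json output traced further down is the other half of the plumbing: one bdev_nvme_attach_controller entry per subsystem, piped to fio over /dev/fd/62 with --ioengine=spdk_bdev, so fio drives I/O through the SPDK userspace NVMe/TCP initiator rather than the kernel host stack.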
00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 bdev_null0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 [2024-07-23 06:30:40.772882] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 bdev_null1 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.757 06:30:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.757 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:48.758 { 00:34:48.758 "params": { 00:34:48.758 "name": "Nvme$subsystem", 00:34:48.758 "trtype": "$TEST_TRANSPORT", 00:34:48.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.758 "adrfam": "ipv4", 00:34:48.758 "trsvcid": "$NVMF_PORT", 00:34:48.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.758 "hdgst": ${hdgst:-false}, 00:34:48.758 "ddgst": ${ddgst:-false} 00:34:48.758 }, 00:34:48.758 "method": "bdev_nvme_attach_controller" 00:34:48.758 } 00:34:48.758 EOF 00:34:48.758 )") 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:48.758 { 00:34:48.758 "params": { 00:34:48.758 "name": "Nvme$subsystem", 00:34:48.758 "trtype": "$TEST_TRANSPORT", 00:34:48.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.758 "adrfam": "ipv4", 00:34:48.758 "trsvcid": "$NVMF_PORT", 00:34:48.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.758 "hdgst": ${hdgst:-false}, 00:34:48.758 "ddgst": ${ddgst:-false} 00:34:48.758 }, 00:34:48.758 "method": "bdev_nvme_attach_controller" 00:34:48.758 } 00:34:48.758 EOF 00:34:48.758 )") 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:48.758 "params": { 00:34:48.758 "name": "Nvme0", 00:34:48.758 "trtype": "tcp", 00:34:48.758 "traddr": "10.0.0.2", 00:34:48.758 "adrfam": "ipv4", 00:34:48.758 "trsvcid": "4420", 00:34:48.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:48.758 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:48.758 "hdgst": false, 00:34:48.758 "ddgst": false 00:34:48.758 }, 00:34:48.758 "method": "bdev_nvme_attach_controller" 00:34:48.758 },{ 00:34:48.758 "params": { 00:34:48.758 "name": "Nvme1", 00:34:48.758 "trtype": "tcp", 00:34:48.758 "traddr": "10.0.0.2", 00:34:48.758 "adrfam": "ipv4", 00:34:48.758 "trsvcid": "4420", 00:34:48.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:48.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:48.758 "hdgst": false, 00:34:48.758 "ddgst": false 00:34:48.758 }, 00:34:48.758 "method": "bdev_nvme_attach_controller" 00:34:48.758 }' 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:48.758 06:30:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.758 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:48.758 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:48.758 fio-3.35 00:34:48.758 Starting 2 threads 00:34:48.758 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.764 00:34:58.764 filename0: (groupid=0, jobs=1): err= 0: pid=1904325: Tue Jul 23 06:30:51 2024 00:34:58.764 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10023msec) 00:34:58.764 slat (nsec): min=5865, max=62587, avg=12541.32, stdev=7838.10 00:34:58.764 clat (usec): min=40907, max=47037, avg=42069.55, stdev=571.94 00:34:58.764 lat (usec): min=40914, max=47082, avg=42082.09, stdev=572.57 00:34:58.764 clat percentiles (usec): 00:34:58.764 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:34:58.764 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:58.764 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:34:58.764 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:34:58.764 | 99.99th=[46924] 
00:34:58.764 bw ( KiB/s): min= 352, max= 384, per=33.51%, avg=379.20, stdev=11.72, samples=20 00:34:58.764 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:34:58.764 lat (msec) : 50=100.00% 00:34:58.764 cpu : usr=97.03%, sys=2.69%, ctx=16, majf=0, minf=117 00:34:58.764 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.764 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.764 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:58.764 filename1: (groupid=0, jobs=1): err= 0: pid=1904327: Tue Jul 23 06:30:51 2024 00:34:58.764 read: IOPS=187, BW=751KiB/s (769kB/s)(7536KiB/10030msec) 00:34:58.764 slat (nsec): min=4141, max=52104, avg=11481.93, stdev=6087.43 00:34:58.764 clat (usec): min=789, max=47616, avg=21257.99, stdev=20309.91 00:34:58.764 lat (usec): min=797, max=47635, avg=21269.47, stdev=20308.71 00:34:58.764 clat percentiles (usec): 00:34:58.764 | 1.00th=[ 840], 5.00th=[ 865], 10.00th=[ 873], 20.00th=[ 898], 00:34:58.764 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:34:58.764 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:58.764 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:34:58.764 | 99.99th=[47449] 00:34:58.764 bw ( KiB/s): min= 704, max= 768, per=66.49%, avg=752.00, stdev=28.43, samples=20 00:34:58.764 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:34:58.764 lat (usec) : 1000=48.14% 00:34:58.764 lat (msec) : 2=1.75%, 50=50.11% 00:34:58.764 cpu : usr=97.38%, sys=2.32%, ctx=14, majf=0, minf=209 00:34:58.764 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.764 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.764 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:58.764 00:34:58.764 Run status group 0 (all jobs): 00:34:58.764 READ: bw=1131KiB/s (1158kB/s), 380KiB/s-751KiB/s (389kB/s-769kB/s), io=11.1MiB (11.6MB), run=10023-10030msec 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.764 06:30:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.764 06:30:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.764 00:34:58.764 real 0m11.263s 00:34:58.764 user 0m20.742s 00:34:58.764 sys 0m0.811s 00:34:58.764 06:30:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:58.764 06:30:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.764 ************************************ 00:34:58.764 END TEST fio_dif_1_multi_subsystems 00:34:58.764 ************************************ 00:34:58.764 06:30:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:58.764 06:30:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:58.764 06:30:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:58.764 06:30:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.764 06:30:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.764 ************************************ 00:34:58.764 START TEST fio_dif_rand_params 00:34:58.764 ************************************ 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.765 bdev_null0 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.765 [2024-07-23 06:30:52.090137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.765 { 00:34:58.765 "params": { 00:34:58.765 "name": "Nvme$subsystem", 00:34:58.765 "trtype": "$TEST_TRANSPORT", 00:34:58.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.765 "adrfam": "ipv4", 00:34:58.765 "trsvcid": "$NVMF_PORT", 00:34:58.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.765 "hdgst": ${hdgst:-false}, 00:34:58.765 "ddgst": ${ddgst:-false} 00:34:58.765 }, 00:34:58.765 "method": "bdev_nvme_attach_controller" 00:34:58.765 } 00:34:58.765 EOF 00:34:58.765 )") 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:58.765 06:30:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:58.765 "params": { 00:34:58.765 "name": "Nvme0", 00:34:58.765 "trtype": "tcp", 00:34:58.765 "traddr": "10.0.0.2", 00:34:58.765 "adrfam": "ipv4", 00:34:58.765 "trsvcid": "4420", 00:34:58.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.765 "hdgst": false, 00:34:58.765 "ddgst": false 00:34:58.765 }, 00:34:58.765 "method": "bdev_nvme_attach_controller" 00:34:58.765 }' 00:34:59.023 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.024 06:30:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.024 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:59.024 ... 
00:34:59.024 fio-3.35 00:34:59.024 Starting 3 threads 00:34:59.282 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.841 00:35:05.842 filename0: (groupid=0, jobs=1): err= 0: pid=1905706: Tue Jul 23 06:30:57 2024 00:35:05.842 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(120MiB/5047msec) 00:35:05.842 slat (nsec): min=3806, max=83043, avg=13709.61, stdev=6001.92 00:35:05.842 clat (usec): min=4764, max=57639, avg=15722.07, stdev=14030.37 00:35:05.842 lat (usec): min=4777, max=57655, avg=15735.78, stdev=14030.54 00:35:05.842 clat percentiles (usec): 00:35:05.842 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 7046], 20.00th=[ 7832], 00:35:05.842 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[12125], 00:35:05.842 | 70.00th=[13566], 80.00th=[14746], 90.00th=[50594], 95.00th=[52691], 00:35:05.842 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:35:05.842 | 99.99th=[57410] 00:35:05.842 bw ( KiB/s): min=15104, max=33024, per=35.56%, avg=24478.20, stdev=5325.25, samples=10 00:35:05.842 iops : min= 118, max= 258, avg=191.20, stdev=41.61, samples=10 00:35:05.842 lat (msec) : 10=42.44%, 20=45.05%, 50=1.46%, 100=11.05% 00:35:05.842 cpu : usr=92.65%, sys=6.86%, ctx=7, majf=0, minf=171 00:35:05.842 IO depths : 1=4.1%, 2=95.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.842 issued rwts: total=959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.842 filename0: (groupid=0, jobs=1): err= 0: pid=1905707: Tue Jul 23 06:30:57 2024 00:35:05.842 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(128MiB/5048msec) 00:35:05.842 slat (nsec): min=4285, max=56372, avg=17985.62, stdev=7953.01 00:35:05.842 clat (usec): min=4991, max=95019, avg=14703.59, stdev=13216.53 00:35:05.842 lat (usec): min=5005, max=95049, avg=14721.57, stdev=13216.66 00:35:05.842 clat percentiles (usec): 00:35:05.842 | 1.00th=[ 6587], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 8225], 00:35:05.842 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[11338], 00:35:05.842 | 70.00th=[12256], 80.00th=[13566], 90.00th=[49546], 95.00th=[51643], 00:35:05.842 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56361], 99.95th=[94897], 00:35:05.842 | 99.99th=[94897] 00:35:05.842 bw ( KiB/s): min=19200, max=35328, per=38.00%, avg=26163.20, stdev=4758.51, samples=10 00:35:05.842 iops : min= 150, max= 276, avg=204.40, stdev=37.18, samples=10 00:35:05.842 lat (msec) : 10=47.80%, 20=41.56%, 50=1.27%, 100=9.37% 00:35:05.842 cpu : usr=86.72%, sys=10.05%, ctx=633, majf=0, minf=101 00:35:05.842 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.842 issued rwts: total=1025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.842 filename0: (groupid=0, jobs=1): err= 0: pid=1905708: Tue Jul 23 06:30:57 2024 00:35:05.842 read: IOPS=146, BW=18.3MiB/s (19.1MB/s)(91.4MiB/5005msec) 00:35:05.842 slat (nsec): min=4502, max=48702, avg=14348.68, stdev=4876.87 00:35:05.842 clat (usec): min=6201, max=93278, avg=20519.20, stdev=16806.01 00:35:05.842 lat (usec): min=6212, max=93296, avg=20533.55, stdev=16806.18 00:35:05.842 clat percentiles (usec): 
00:35:05.842 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[10159], 00:35:05.842 | 30.00th=[11469], 40.00th=[12780], 50.00th=[13829], 60.00th=[15008], 00:35:05.842 | 70.00th=[16319], 80.00th=[19268], 90.00th=[53740], 95.00th=[55313], 00:35:05.842 | 99.00th=[58459], 99.50th=[58983], 99.90th=[92799], 99.95th=[92799], 00:35:05.842 | 99.99th=[92799] 00:35:05.842 bw ( KiB/s): min=11776, max=23296, per=27.07%, avg=18636.80, stdev=4183.60, samples=10 00:35:05.842 iops : min= 92, max= 182, avg=145.60, stdev=32.68, samples=10 00:35:05.842 lat (msec) : 10=19.43%, 20=60.74%, 50=1.64%, 100=18.19% 00:35:05.842 cpu : usr=93.43%, sys=6.15%, ctx=6, majf=0, minf=110 00:35:05.842 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.842 issued rwts: total=731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.842 00:35:05.842 Run status group 0 (all jobs): 00:35:05.842 READ: bw=67.2MiB/s (70.5MB/s), 18.3MiB/s-25.4MiB/s (19.1MB/s-26.6MB/s), io=339MiB (356MB), run=5005-5048msec 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
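The teardown/rebuild traced here (destroy_subsystems 0, then create_subsystems 0 1 2 with NULL_DIF=2) reduces, per subsystem, to four RPCs against the running nvmf target; rpc_cmd is the test framework's RPC helper. A sketch for subsystem 0 using the same arguments that appear in the trace below — a reading aid, not the literal dif.sh code:

sub=0
# Null bdev backing the namespace (size, block size, plus metadata/DIF options as in the trace).
rpc_cmd bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 2
# NVMe-oF subsystem that will expose the bdev.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub --serial-number 53313233-$sub --allow-any-host
# Attach the bdev as a namespace, then open a TCP listener for the initiator side.
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub -t tcp -a 10.0.0.2 -s 4420

The teardown path shown just above is the mirror image: nvmf_delete_subsystem followed by bdev_null_delete.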
00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 bdev_null0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 [2024-07-23 06:30:58.198600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 bdev_null1 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:05.842 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.843 bdev_null2 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:05.843 { 00:35:05.843 "params": { 00:35:05.843 "name": "Nvme$subsystem", 00:35:05.843 "trtype": "$TEST_TRANSPORT", 00:35:05.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.843 "adrfam": "ipv4", 00:35:05.843 "trsvcid": "$NVMF_PORT", 00:35:05.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.843 "hdgst": ${hdgst:-false}, 00:35:05.843 "ddgst": ${ddgst:-false} 00:35:05.843 }, 00:35:05.843 "method": "bdev_nvme_attach_controller" 00:35:05.843 } 00:35:05.843 EOF 00:35:05.843 )") 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:05.843 { 00:35:05.843 "params": { 00:35:05.843 "name": "Nvme$subsystem", 00:35:05.843 "trtype": "$TEST_TRANSPORT", 00:35:05.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.843 "adrfam": "ipv4", 00:35:05.843 "trsvcid": "$NVMF_PORT", 00:35:05.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.843 "hdgst": ${hdgst:-false}, 00:35:05.843 "ddgst": ${ddgst:-false} 00:35:05.843 }, 00:35:05.843 "method": "bdev_nvme_attach_controller" 00:35:05.843 } 00:35:05.843 EOF 00:35:05.843 )") 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:05.843 { 00:35:05.843 "params": { 00:35:05.843 "name": "Nvme$subsystem", 00:35:05.843 "trtype": "$TEST_TRANSPORT", 00:35:05.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.843 "adrfam": "ipv4", 00:35:05.843 "trsvcid": "$NVMF_PORT", 00:35:05.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.843 "hdgst": ${hdgst:-false}, 00:35:05.843 "ddgst": ${ddgst:-false} 00:35:05.843 }, 00:35:05.843 "method": "bdev_nvme_attach_controller" 00:35:05.843 } 00:35:05.843 EOF 00:35:05.843 )") 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:05.843 "params": { 00:35:05.843 "name": "Nvme0", 00:35:05.843 "trtype": "tcp", 00:35:05.843 "traddr": "10.0.0.2", 00:35:05.843 "adrfam": "ipv4", 00:35:05.843 "trsvcid": "4420", 00:35:05.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.843 "hdgst": false, 00:35:05.843 "ddgst": false 00:35:05.843 }, 00:35:05.843 "method": "bdev_nvme_attach_controller" 00:35:05.843 },{ 00:35:05.843 "params": { 00:35:05.843 "name": "Nvme1", 00:35:05.843 "trtype": "tcp", 00:35:05.843 "traddr": "10.0.0.2", 00:35:05.843 "adrfam": "ipv4", 00:35:05.843 "trsvcid": "4420", 00:35:05.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:05.843 "hdgst": false, 00:35:05.843 "ddgst": false 00:35:05.843 }, 00:35:05.843 "method": "bdev_nvme_attach_controller" 00:35:05.843 },{ 00:35:05.843 "params": { 00:35:05.843 "name": "Nvme2", 00:35:05.843 "trtype": "tcp", 00:35:05.843 "traddr": "10.0.0.2", 00:35:05.843 "adrfam": "ipv4", 00:35:05.843 "trsvcid": "4420", 00:35:05.843 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:05.843 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:05.843 "hdgst": false, 00:35:05.843 "ddgst": false 00:35:05.843 }, 00:35:05.843 "method": "bdev_nvme_attach_controller" 00:35:05.843 }' 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:05.843 06:30:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.843 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:05.843 ... 00:35:05.843 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:05.843 ... 00:35:05.843 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:05.843 ... 00:35:05.843 fio-3.35 00:35:05.843 Starting 24 threads 00:35:05.843 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.059 00:35:18.059 filename0: (groupid=0, jobs=1): err= 0: pid=1906561: Tue Jul 23 06:31:09 2024 00:35:18.059 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10031msec) 00:35:18.059 slat (usec): min=7, max=197, avg=27.81, stdev=27.37 00:35:18.059 clat (usec): min=7702, max=57663, avg=33857.36, stdev=4888.00 00:35:18.059 lat (usec): min=7711, max=57675, avg=33885.17, stdev=4883.98 00:35:18.059 clat percentiles (usec): 00:35:18.059 | 1.00th=[16450], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:35:18.059 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.059 | 70.00th=[33162], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.059 | 99.00th=[43254], 99.50th=[43779], 99.90th=[53216], 99.95th=[53216], 00:35:18.059 | 99.99th=[57410] 00:35:18.059 bw ( KiB/s): min= 1536, max= 2216, per=4.24%, avg=1876.95, stdev=185.23, samples=20 00:35:18.059 iops : min= 384, max= 554, avg=469.20, stdev=46.27, samples=20 00:35:18.059 lat (msec) : 10=0.79%, 20=1.00%, 50=97.96%, 100=0.26% 00:35:18.059 cpu : usr=97.91%, sys=1.67%, ctx=18, majf=0, minf=64 00:35:18.059 IO depths : 1=4.8%, 2=10.9%, 4=24.3%, 8=52.3%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:18.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.059 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.059 issued rwts: total=4695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.059 filename0: (groupid=0, jobs=1): err= 0: pid=1906562: Tue Jul 23 06:31:09 2024 00:35:18.059 read: IOPS=462, BW=1852KiB/s (1896kB/s)(18.1MiB/10023msec) 00:35:18.059 slat (usec): min=8, max=140, avg=20.39, stdev=18.21 00:35:18.059 clat (usec): min=22145, max=58817, avg=34388.22, stdev=3919.95 00:35:18.059 lat (usec): min=22154, max=58826, avg=34408.61, stdev=3916.70 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[28181], 5.00th=[31327], 10.00th=[32113], 20.00th=[32375], 00:35:18.060 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:35:18.060 | 70.00th=[33424], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:35:18.060 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:35:18.060 | 99.99th=[58983] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1845.89, stdev=176.98, samples=19 00:35:18.060 iops : min= 352, max= 512, avg=461.47, stdev=44.25, samples=19 00:35:18.060 lat (msec) : 50=99.96%, 100=0.04% 00:35:18.060 
cpu : usr=97.52%, sys=2.02%, ctx=26, majf=0, minf=48 00:35:18.060 IO depths : 1=4.2%, 2=10.3%, 4=24.5%, 8=52.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:18.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.060 filename0: (groupid=0, jobs=1): err= 0: pid=1906563: Tue Jul 23 06:31:09 2024 00:35:18.060 read: IOPS=464, BW=1860KiB/s (1905kB/s)(18.2MiB/10005msec) 00:35:18.060 slat (usec): min=8, max=152, avg=47.28, stdev=22.18 00:35:18.060 clat (usec): min=7784, max=70654, avg=33992.02, stdev=4689.96 00:35:18.060 lat (usec): min=7801, max=70686, avg=34039.29, stdev=4687.37 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[20579], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:35:18.060 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:18.060 | 70.00th=[33162], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.060 | 99.00th=[44303], 99.50th=[45351], 99.90th=[60031], 99.95th=[60031], 00:35:18.060 | 99.99th=[70779] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2192, per=4.18%, avg=1850.95, stdev=191.78, samples=19 00:35:18.060 iops : min= 352, max= 548, avg=462.74, stdev=47.95, samples=19 00:35:18.060 lat (msec) : 10=0.04%, 20=0.86%, 50=98.75%, 100=0.34% 00:35:18.060 cpu : usr=98.11%, sys=1.49%, ctx=15, majf=0, minf=43 00:35:18.060 IO depths : 1=4.0%, 2=9.8%, 4=23.3%, 8=54.1%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:18.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 issued rwts: total=4652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.060 filename0: (groupid=0, jobs=1): err= 0: pid=1906564: Tue Jul 23 06:31:09 2024 00:35:18.060 read: IOPS=461, BW=1848KiB/s (1892kB/s)(18.1MiB/10005msec) 00:35:18.060 slat (usec): min=8, max=143, avg=42.41, stdev=21.79 00:35:18.060 clat (usec): min=13282, max=59768, avg=34293.06, stdev=4790.09 00:35:18.060 lat (usec): min=13310, max=59834, avg=34335.47, stdev=4785.90 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[22152], 5.00th=[31327], 10.00th=[31851], 20.00th=[32113], 00:35:18.060 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.060 | 70.00th=[33162], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:35:18.060 | 99.00th=[48497], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:35:18.060 | 99.99th=[59507] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2032, per=4.15%, avg=1837.47, stdev=170.19, samples=19 00:35:18.060 iops : min= 352, max= 508, avg=459.37, stdev=42.55, samples=19 00:35:18.060 lat (msec) : 20=1.00%, 50=98.29%, 100=0.71% 00:35:18.060 cpu : usr=96.71%, sys=2.19%, ctx=38, majf=0, minf=36 00:35:18.060 IO depths : 1=1.5%, 2=7.1%, 4=22.3%, 8=57.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:35:18.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 issued rwts: total=4622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.060 filename0: (groupid=0, jobs=1): err= 0: pid=1906565: Tue Jul 23 06:31:09 2024 00:35:18.060 read: IOPS=462, BW=1849KiB/s 
(1893kB/s)(18.1MiB/10005msec) 00:35:18.060 slat (usec): min=9, max=152, avg=56.12, stdev=20.49 00:35:18.060 clat (usec): min=13112, max=59297, avg=34088.81, stdev=4154.87 00:35:18.060 lat (usec): min=13149, max=59336, avg=34144.93, stdev=4149.90 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:18.060 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:18.060 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:35:18.060 | 99.00th=[43779], 99.50th=[44303], 99.90th=[58983], 99.95th=[58983], 00:35:18.060 | 99.99th=[59507] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1839.32, stdev=172.47, samples=19 00:35:18.060 iops : min= 352, max= 512, avg=459.79, stdev=43.13, samples=19 00:35:18.060 lat (msec) : 20=0.35%, 50=99.31%, 100=0.35% 00:35:18.060 cpu : usr=93.22%, sys=3.50%, ctx=94, majf=0, minf=60 00:35:18.060 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.060 filename0: (groupid=0, jobs=1): err= 0: pid=1906566: Tue Jul 23 06:31:09 2024 00:35:18.060 read: IOPS=463, BW=1855KiB/s (1899kB/s)(18.1MiB/10007msec) 00:35:18.060 slat (usec): min=7, max=130, avg=23.43, stdev=15.68 00:35:18.060 clat (usec): min=14424, max=57509, avg=34317.42, stdev=3786.56 00:35:18.060 lat (usec): min=14458, max=57518, avg=34340.85, stdev=3788.76 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[30016], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:35:18.060 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:35:18.060 | 70.00th=[33424], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:35:18.060 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[51119], 00:35:18.060 | 99.99th=[57410] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1845.68, stdev=177.54, samples=19 00:35:18.060 iops : min= 352, max= 512, avg=461.42, stdev=44.38, samples=19 00:35:18.060 lat (msec) : 20=0.17%, 50=99.74%, 100=0.09% 00:35:18.060 cpu : usr=97.81%, sys=1.77%, ctx=15, majf=0, minf=48 00:35:18.060 IO depths : 1=3.6%, 2=9.8%, 4=24.9%, 8=52.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:35:18.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.060 filename0: (groupid=0, jobs=1): err= 0: pid=1906567: Tue Jul 23 06:31:09 2024 00:35:18.060 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10007msec) 00:35:18.060 slat (usec): min=9, max=131, avg=47.81, stdev=27.20 00:35:18.060 clat (usec): min=24065, max=51592, avg=34268.17, stdev=3845.33 00:35:18.060 lat (usec): min=24084, max=51648, avg=34315.97, stdev=3837.61 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:18.060 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.060 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:35:18.060 | 99.00th=[43779], 99.50th=[44827], 99.90th=[49546], 
99.95th=[49546], 00:35:18.060 | 99.99th=[51643] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1838.95, stdev=181.98, samples=19 00:35:18.060 iops : min= 352, max= 512, avg=459.74, stdev=45.49, samples=19 00:35:18.060 lat (msec) : 50=99.96%, 100=0.04% 00:35:18.060 cpu : usr=97.43%, sys=1.72%, ctx=155, majf=0, minf=47 00:35:18.060 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:18.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.060 filename0: (groupid=0, jobs=1): err= 0: pid=1906568: Tue Jul 23 06:31:09 2024 00:35:18.060 read: IOPS=465, BW=1862KiB/s (1906kB/s)(18.2MiB/10013msec) 00:35:18.060 slat (usec): min=8, max=125, avg=51.82, stdev=21.26 00:35:18.060 clat (usec): min=13173, max=67492, avg=33981.81, stdev=5171.18 00:35:18.060 lat (usec): min=13206, max=67525, avg=34033.63, stdev=5167.82 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[19268], 5.00th=[29754], 10.00th=[31589], 20.00th=[32113], 00:35:18.060 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:18.060 | 70.00th=[33162], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.060 | 99.00th=[47973], 99.50th=[51119], 99.90th=[67634], 99.95th=[67634], 00:35:18.060 | 99.99th=[67634] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2064, per=4.18%, avg=1853.47, stdev=188.81, samples=19 00:35:18.060 iops : min= 352, max= 516, avg=463.37, stdev=47.20, samples=19 00:35:18.060 lat (msec) : 20=1.31%, 50=98.00%, 100=0.69% 00:35:18.060 cpu : usr=97.42%, sys=1.83%, ctx=138, majf=0, minf=47 00:35:18.060 IO depths : 1=2.8%, 2=8.1%, 4=21.4%, 8=57.5%, 16=10.3%, 32=0.0%, >=64=0.0% 00:35:18.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.060 issued rwts: total=4660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.060 filename1: (groupid=0, jobs=1): err= 0: pid=1906569: Tue Jul 23 06:31:09 2024 00:35:18.060 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.1MiB/10061msec) 00:35:18.060 slat (usec): min=5, max=138, avg=34.37, stdev=16.73 00:35:18.060 clat (usec): min=16447, max=64425, avg=34295.26, stdev=3916.03 00:35:18.060 lat (usec): min=16462, max=64445, avg=34329.63, stdev=3913.77 00:35:18.060 clat percentiles (usec): 00:35:18.060 | 1.00th=[29492], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:35:18.060 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.060 | 70.00th=[33162], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.060 | 99.00th=[43254], 99.50th=[43779], 99.90th=[51643], 99.95th=[51643], 00:35:18.060 | 99.99th=[64226] 00:35:18.060 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1845.89, stdev=177.55, samples=19 00:35:18.060 iops : min= 352, max= 512, avg=461.47, stdev=44.39, samples=19 00:35:18.060 lat (msec) : 20=0.13%, 50=99.74%, 100=0.13% 00:35:18.060 cpu : usr=91.90%, sys=3.98%, ctx=943, majf=0, minf=39 00:35:18.061 IO depths : 1=5.1%, 2=11.2%, 4=24.7%, 8=51.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:18.061 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename1: (groupid=0, jobs=1): err= 0: pid=1906570: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=466, BW=1867KiB/s (1911kB/s)(18.3MiB/10025msec) 00:35:18.061 slat (usec): min=6, max=496, avg=34.19, stdev=28.63 00:35:18.061 clat (usec): min=8001, max=60490, avg=34009.54, stdev=4480.54 00:35:18.061 lat (usec): min=8014, max=60499, avg=34043.74, stdev=4478.18 00:35:18.061 clat percentiles (usec): 00:35:18.061 | 1.00th=[19268], 5.00th=[31327], 10.00th=[31851], 20.00th=[32375], 00:35:18.061 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:18.061 | 70.00th=[33162], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.061 | 99.00th=[43254], 99.50th=[43254], 99.90th=[47973], 99.95th=[57934], 00:35:18.061 | 99.99th=[60556] 00:35:18.061 bw ( KiB/s): min= 1408, max= 2048, per=4.20%, avg=1861.89, stdev=187.90, samples=19 00:35:18.061 iops : min= 352, max= 512, avg=465.47, stdev=46.97, samples=19 00:35:18.061 lat (msec) : 10=0.24%, 20=0.92%, 50=98.76%, 100=0.09% 00:35:18.061 cpu : usr=93.84%, sys=3.11%, ctx=81, majf=0, minf=43 00:35:18.061 IO depths : 1=3.9%, 2=9.9%, 4=24.2%, 8=53.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 issued rwts: total=4678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename1: (groupid=0, jobs=1): err= 0: pid=1906571: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10007msec) 00:35:18.061 slat (usec): min=8, max=154, avg=38.71, stdev=21.01 00:35:18.061 clat (usec): min=18735, max=49281, avg=34241.54, stdev=4157.72 00:35:18.061 lat (usec): min=18743, max=49311, avg=34280.25, stdev=4157.53 00:35:18.061 clat percentiles (usec): 00:35:18.061 | 1.00th=[24249], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:35:18.061 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:18.061 | 70.00th=[33162], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.061 | 99.00th=[43779], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:35:18.061 | 99.99th=[49021] 00:35:18.061 bw ( KiB/s): min= 1408, max= 2048, per=4.16%, avg=1844.00, stdev=185.65, samples=19 00:35:18.061 iops : min= 352, max= 512, avg=461.00, stdev=46.41, samples=19 00:35:18.061 lat (msec) : 20=0.13%, 50=99.87% 00:35:18.061 cpu : usr=94.30%, sys=3.21%, ctx=99, majf=0, minf=58 00:35:18.061 IO depths : 1=4.9%, 2=10.8%, 4=23.6%, 8=53.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 issued rwts: total=4636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename1: (groupid=0, jobs=1): err= 0: pid=1906572: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=467, BW=1869KiB/s (1913kB/s)(18.3MiB/10029msec) 00:35:18.061 slat (usec): min=7, max=171, avg=30.53, stdev=33.27 00:35:18.061 clat (usec): min=7755, max=43773, avg=33978.40, stdev=4488.93 00:35:18.061 lat (usec): min=7776, max=43792, avg=34008.93, stdev=4482.78 00:35:18.061 clat percentiles (usec): 00:35:18.061 | 1.00th=[19268], 
5.00th=[29492], 10.00th=[31327], 20.00th=[32113], 00:35:18.061 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:18.061 | 70.00th=[33424], 80.00th=[35914], 90.00th=[42730], 95.00th=[42730], 00:35:18.061 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:35:18.061 | 99.99th=[43779] 00:35:18.061 bw ( KiB/s): min= 1408, max= 2096, per=4.22%, avg=1867.35, stdev=190.65, samples=20 00:35:18.061 iops : min= 352, max= 524, avg=466.80, stdev=47.63, samples=20 00:35:18.061 lat (msec) : 10=0.15%, 20=0.94%, 50=98.91% 00:35:18.061 cpu : usr=97.96%, sys=1.64%, ctx=17, majf=0, minf=60 00:35:18.061 IO depths : 1=5.5%, 2=11.6%, 4=24.3%, 8=51.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 issued rwts: total=4685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename1: (groupid=0, jobs=1): err= 0: pid=1906573: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=462, BW=1849KiB/s (1893kB/s)(18.1MiB/10003msec) 00:35:18.061 slat (usec): min=5, max=141, avg=50.46, stdev=16.52 00:35:18.061 clat (usec): min=20607, max=52473, avg=34177.81, stdev=3754.49 00:35:18.061 lat (usec): min=20618, max=52529, avg=34228.27, stdev=3751.56 00:35:18.061 clat percentiles (usec): 00:35:18.061 | 1.00th=[30802], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:35:18.061 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:18.061 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:35:18.061 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:35:18.061 | 99.99th=[52691] 00:35:18.061 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1839.16, stdev=182.07, samples=19 00:35:18.061 iops : min= 352, max= 512, avg=459.79, stdev=45.52, samples=19 00:35:18.061 lat (msec) : 50=99.96%, 100=0.04% 00:35:18.061 cpu : usr=95.75%, sys=2.62%, ctx=193, majf=0, minf=60 00:35:18.061 IO depths : 1=5.2%, 2=11.5%, 4=24.9%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename1: (groupid=0, jobs=1): err= 0: pid=1906574: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10005msec) 00:35:18.061 slat (usec): min=4, max=272, avg=52.33, stdev=27.81 00:35:18.061 clat (usec): min=18430, max=55535, avg=33907.11, stdev=4526.48 00:35:18.061 lat (usec): min=18483, max=55593, avg=33959.44, stdev=4523.71 00:35:18.061 clat percentiles (usec): 00:35:18.061 | 1.00th=[20317], 5.00th=[30278], 10.00th=[31589], 20.00th=[32113], 00:35:18.061 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:18.061 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:35:18.061 | 99.00th=[44303], 99.50th=[46924], 99.90th=[55313], 99.95th=[55313], 00:35:18.061 | 99.99th=[55313] 00:35:18.061 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1853.47, stdev=200.00, samples=19 00:35:18.061 iops : min= 352, max= 512, avg=463.37, stdev=50.00, samples=19 00:35:18.061 lat (msec) : 20=0.67%, 50=98.99%, 100=0.34% 00:35:18.061 cpu : usr=97.70%, sys=1.70%, 
ctx=56, majf=0, minf=58 00:35:18.061 IO depths : 1=4.3%, 2=9.9%, 4=22.7%, 8=54.6%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 issued rwts: total=4658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename1: (groupid=0, jobs=1): err= 0: pid=1906575: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:35:18.061 slat (usec): min=7, max=157, avg=56.07, stdev=24.57 00:35:18.061 clat (usec): min=29904, max=49526, avg=34172.69, stdev=3813.28 00:35:18.061 lat (usec): min=29988, max=49540, avg=34228.76, stdev=3805.09 00:35:18.061 clat percentiles (usec): 00:35:18.061 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:18.061 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:18.061 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:35:18.061 | 99.00th=[43779], 99.50th=[44303], 99.90th=[49546], 99.95th=[49546], 00:35:18.061 | 99.99th=[49546] 00:35:18.061 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1838.95, stdev=181.98, samples=19 00:35:18.061 iops : min= 352, max= 512, avg=459.74, stdev=45.49, samples=19 00:35:18.061 lat (msec) : 50=100.00% 00:35:18.061 cpu : usr=97.34%, sys=1.90%, ctx=91, majf=0, minf=52 00:35:18.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename1: (groupid=0, jobs=1): err= 0: pid=1906576: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.0MiB/10005msec) 00:35:18.061 slat (usec): min=8, max=181, avg=48.27, stdev=26.05 00:35:18.061 clat (usec): min=8344, max=65453, avg=34277.03, stdev=5032.03 00:35:18.061 lat (usec): min=8429, max=65482, avg=34325.30, stdev=5025.06 00:35:18.061 clat percentiles (usec): 00:35:18.061 | 1.00th=[22152], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:35:18.061 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:18.061 | 70.00th=[33162], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:35:18.061 | 99.00th=[51119], 99.50th=[59507], 99.90th=[65274], 99.95th=[65274], 00:35:18.061 | 99.99th=[65274] 00:35:18.061 bw ( KiB/s): min= 1408, max= 2048, per=4.14%, avg=1834.95, stdev=182.74, samples=19 00:35:18.061 iops : min= 352, max= 512, avg=458.74, stdev=45.69, samples=19 00:35:18.061 lat (msec) : 10=0.04%, 20=0.48%, 50=98.44%, 100=1.04% 00:35:18.061 cpu : usr=95.38%, sys=2.62%, ctx=82, majf=0, minf=62 00:35:18.061 IO depths : 1=3.2%, 2=8.8%, 4=23.1%, 8=55.5%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:18.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.061 issued rwts: total=4614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.061 filename2: (groupid=0, jobs=1): err= 0: pid=1906577: Tue Jul 23 06:31:09 2024 00:35:18.061 read: IOPS=463, BW=1856KiB/s (1901kB/s)(18.1MiB/10013msec) 00:35:18.061 slat 
(usec): min=5, max=890, avg=41.04, stdev=27.28 00:35:18.061 clat (usec): min=7134, max=78209, avg=34128.32, stdev=4940.43 00:35:18.061 lat (usec): min=7155, max=78226, avg=34169.37, stdev=4939.86 00:35:18.061 clat percentiles (usec): 00:35:18.062 | 1.00th=[20841], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:35:18.062 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.062 | 70.00th=[33162], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.062 | 99.00th=[44303], 99.50th=[51119], 99.90th=[66847], 99.95th=[66847], 00:35:18.062 | 99.99th=[78119] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1848.42, stdev=178.68, samples=19 00:35:18.062 iops : min= 352, max= 512, avg=462.11, stdev=44.67, samples=19 00:35:18.062 lat (msec) : 10=0.09%, 20=0.86%, 50=98.41%, 100=0.65% 00:35:18.062 cpu : usr=91.58%, sys=4.04%, ctx=154, majf=0, minf=44 00:35:18.062 IO depths : 1=3.3%, 2=9.0%, 4=23.2%, 8=55.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 issued rwts: total=4646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.062 filename2: (groupid=0, jobs=1): err= 0: pid=1906578: Tue Jul 23 06:31:09 2024 00:35:18.062 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10008msec) 00:35:18.062 slat (usec): min=7, max=172, avg=41.66, stdev=29.00 00:35:18.062 clat (usec): min=10133, max=61795, avg=34491.77, stdev=5321.34 00:35:18.062 lat (usec): min=10156, max=61870, avg=34533.43, stdev=5326.55 00:35:18.062 clat percentiles (usec): 00:35:18.062 | 1.00th=[19006], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:35:18.062 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:18.062 | 70.00th=[33424], 80.00th=[38536], 90.00th=[42206], 95.00th=[42730], 00:35:18.062 | 99.00th=[54264], 99.50th=[55313], 99.90th=[60556], 99.95th=[61604], 00:35:18.062 | 99.99th=[61604] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2155, per=4.14%, avg=1832.16, stdev=194.87, samples=19 00:35:18.062 iops : min= 352, max= 538, avg=458.00, stdev=48.65, samples=19 00:35:18.062 lat (msec) : 20=1.96%, 50=96.52%, 100=1.52% 00:35:18.062 cpu : usr=98.21%, sys=1.36%, ctx=17, majf=0, minf=48 00:35:18.062 IO depths : 1=4.3%, 2=8.6%, 4=18.9%, 8=59.8%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.062 filename2: (groupid=0, jobs=1): err= 0: pid=1906579: Tue Jul 23 06:31:09 2024 00:35:18.062 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10025msec) 00:35:18.062 slat (usec): min=5, max=177, avg=41.41, stdev=31.46 00:35:18.062 clat (usec): min=10890, max=61839, avg=34060.39, stdev=5613.74 00:35:18.062 lat (usec): min=10900, max=61853, avg=34101.80, stdev=5609.17 00:35:18.062 clat percentiles (usec): 00:35:18.062 | 1.00th=[18220], 5.00th=[28443], 10.00th=[31065], 20.00th=[31851], 00:35:18.062 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:18.062 | 70.00th=[33424], 80.00th=[36439], 90.00th=[42730], 95.00th=[43254], 00:35:18.062 | 99.00th=[55313], 99.50th=[60031], 99.90th=[61604], 99.95th=[61604], 00:35:18.062 | 
99.99th=[61604] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2112, per=4.19%, avg=1856.00, stdev=183.67, samples=19 00:35:18.062 iops : min= 352, max= 528, avg=464.00, stdev=45.92, samples=19 00:35:18.062 lat (msec) : 20=2.08%, 50=96.68%, 100=1.24% 00:35:18.062 cpu : usr=97.98%, sys=1.52%, ctx=23, majf=0, minf=63 00:35:18.062 IO depths : 1=2.6%, 2=5.3%, 4=14.3%, 8=67.6%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 issued rwts: total=4664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.062 filename2: (groupid=0, jobs=1): err= 0: pid=1906580: Tue Jul 23 06:31:09 2024 00:35:18.062 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:35:18.062 slat (usec): min=8, max=213, avg=39.79, stdev=26.41 00:35:18.062 clat (usec): min=8794, max=60468, avg=34298.75, stdev=4335.66 00:35:18.062 lat (usec): min=8819, max=60477, avg=34338.53, stdev=4335.68 00:35:18.062 clat percentiles (usec): 00:35:18.062 | 1.00th=[29754], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:18.062 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.062 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:35:18.062 | 99.00th=[44827], 99.50th=[49546], 99.90th=[57934], 99.95th=[60556], 00:35:18.062 | 99.99th=[60556] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1838.95, stdev=184.21, samples=19 00:35:18.062 iops : min= 352, max= 512, avg=459.74, stdev=46.05, samples=19 00:35:18.062 lat (msec) : 10=0.15%, 20=0.41%, 50=99.03%, 100=0.41% 00:35:18.062 cpu : usr=97.42%, sys=1.81%, ctx=31, majf=0, minf=43 00:35:18.062 IO depths : 1=3.9%, 2=9.3%, 4=23.1%, 8=54.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.062 filename2: (groupid=0, jobs=1): err= 0: pid=1906581: Tue Jul 23 06:31:09 2024 00:35:18.062 read: IOPS=462, BW=1850KiB/s (1894kB/s)(18.1MiB/10011msec) 00:35:18.062 slat (usec): min=9, max=227, avg=53.91, stdev=22.73 00:35:18.062 clat (usec): min=13263, max=57123, avg=34114.20, stdev=4497.80 00:35:18.062 lat (usec): min=13277, max=57161, avg=34168.11, stdev=4494.75 00:35:18.062 clat percentiles (usec): 00:35:18.062 | 1.00th=[29492], 5.00th=[31327], 10.00th=[31851], 20.00th=[32113], 00:35:18.062 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:18.062 | 70.00th=[32900], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:35:18.062 | 99.00th=[50594], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:35:18.062 | 99.99th=[56886] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2048, per=4.16%, avg=1841.84, stdev=183.08, samples=19 00:35:18.062 iops : min= 352, max= 512, avg=460.42, stdev=45.78, samples=19 00:35:18.062 lat (msec) : 20=0.43%, 50=98.40%, 100=1.17% 00:35:18.062 cpu : usr=97.81%, sys=1.61%, ctx=28, majf=0, minf=49 00:35:18.062 IO depths : 1=5.4%, 2=11.6%, 4=24.8%, 8=51.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 issued 
rwts: total=4630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.062 filename2: (groupid=0, jobs=1): err= 0: pid=1906582: Tue Jul 23 06:31:09 2024 00:35:18.062 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10007msec) 00:35:18.062 slat (usec): min=8, max=145, avg=45.89, stdev=28.27 00:35:18.062 clat (usec): min=27494, max=48740, avg=34263.13, stdev=3772.63 00:35:18.062 lat (usec): min=27543, max=48770, avg=34309.03, stdev=3759.86 00:35:18.062 clat percentiles (usec): 00:35:18.062 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:18.062 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.062 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:35:18.062 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:35:18.062 | 99.99th=[48497] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1838.95, stdev=181.98, samples=19 00:35:18.062 iops : min= 352, max= 512, avg=459.74, stdev=45.49, samples=19 00:35:18.062 lat (msec) : 50=100.00% 00:35:18.062 cpu : usr=95.59%, sys=2.76%, ctx=95, majf=0, minf=59 00:35:18.062 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.062 filename2: (groupid=0, jobs=1): err= 0: pid=1906583: Tue Jul 23 06:31:09 2024 00:35:18.062 read: IOPS=461, BW=1848KiB/s (1892kB/s)(18.1MiB/10007msec) 00:35:18.062 slat (usec): min=11, max=181, avg=54.92, stdev=22.41 00:35:18.062 clat (usec): min=13147, max=62169, avg=34132.24, stdev=4231.09 00:35:18.062 lat (usec): min=13170, max=62203, avg=34187.16, stdev=4227.15 00:35:18.062 clat percentiles (usec): 00:35:18.062 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:18.062 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:18.062 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:35:18.062 | 99.00th=[43779], 99.50th=[44303], 99.90th=[62129], 99.95th=[62129], 00:35:18.062 | 99.99th=[62129] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1838.32, stdev=172.07, samples=19 00:35:18.062 iops : min= 352, max= 512, avg=459.58, stdev=43.02, samples=19 00:35:18.062 lat (msec) : 20=0.35%, 50=99.26%, 100=0.39% 00:35:18.062 cpu : usr=98.07%, sys=1.53%, ctx=29, majf=0, minf=44 00:35:18.062 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.062 issued rwts: total=4622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.062 filename2: (groupid=0, jobs=1): err= 0: pid=1906584: Tue Jul 23 06:31:09 2024 00:35:18.062 read: IOPS=465, BW=1864KiB/s (1909kB/s)(18.2MiB/10001msec) 00:35:18.062 slat (usec): min=8, max=209, avg=30.18, stdev=22.77 00:35:18.062 clat (usec): min=10723, max=43737, avg=34063.10, stdev=4244.35 00:35:18.062 lat (usec): min=10765, max=43761, avg=34093.28, stdev=4242.07 00:35:18.062 clat percentiles (usec): 00:35:18.062 | 1.00th=[22152], 5.00th=[31065], 10.00th=[31851], 
20.00th=[32113], 00:35:18.062 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:18.062 | 70.00th=[33424], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:35:18.062 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:35:18.062 | 99.99th=[43779] 00:35:18.062 bw ( KiB/s): min= 1408, max= 2080, per=4.20%, avg=1860.79, stdev=189.42, samples=19 00:35:18.062 iops : min= 352, max= 520, avg=465.16, stdev=47.32, samples=19 00:35:18.062 lat (msec) : 20=0.71%, 50=99.29% 00:35:18.062 cpu : usr=97.33%, sys=1.78%, ctx=142, majf=0, minf=66 00:35:18.062 IO depths : 1=5.4%, 2=11.4%, 4=24.1%, 8=52.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.063 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.063 issued rwts: total=4660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.063 00:35:18.063 Run status group 0 (all jobs): 00:35:18.063 READ: bw=43.2MiB/s (45.3MB/s), 1835KiB/s-1872KiB/s (1879kB/s-1917kB/s), io=435MiB (456MB), run=10001-10061msec 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 bdev_null0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 [2024-07-23 06:31:09.726130] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 bdev_null1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:18.063 { 00:35:18.063 "params": { 00:35:18.063 "name": "Nvme$subsystem", 00:35:18.063 "trtype": "$TEST_TRANSPORT", 00:35:18.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.063 "adrfam": "ipv4", 00:35:18.063 "trsvcid": "$NVMF_PORT", 00:35:18.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.063 "hdgst": ${hdgst:-false}, 00:35:18.063 "ddgst": ${ddgst:-false} 00:35:18.063 }, 00:35:18.063 "method": "bdev_nvme_attach_controller" 00:35:18.063 } 00:35:18.063 EOF 00:35:18.063 )") 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:18.063 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:18.064 { 00:35:18.064 "params": { 00:35:18.064 "name": "Nvme$subsystem", 00:35:18.064 "trtype": "$TEST_TRANSPORT", 00:35:18.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.064 "adrfam": "ipv4", 00:35:18.064 "trsvcid": "$NVMF_PORT", 00:35:18.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.064 "hdgst": ${hdgst:-false}, 00:35:18.064 "ddgst": ${ddgst:-false} 00:35:18.064 }, 00:35:18.064 "method": "bdev_nvme_attach_controller" 00:35:18.064 } 00:35:18.064 EOF 00:35:18.064 )") 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:18.064 "params": { 00:35:18.064 "name": "Nvme0", 00:35:18.064 "trtype": "tcp", 00:35:18.064 "traddr": "10.0.0.2", 00:35:18.064 "adrfam": "ipv4", 00:35:18.064 "trsvcid": "4420", 00:35:18.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.064 "hdgst": false, 00:35:18.064 "ddgst": false 00:35:18.064 }, 00:35:18.064 "method": "bdev_nvme_attach_controller" 00:35:18.064 },{ 00:35:18.064 "params": { 00:35:18.064 "name": "Nvme1", 00:35:18.064 "trtype": "tcp", 00:35:18.064 "traddr": "10.0.0.2", 00:35:18.064 "adrfam": "ipv4", 00:35:18.064 "trsvcid": "4420", 00:35:18.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.064 "hdgst": false, 00:35:18.064 "ddgst": false 00:35:18.064 }, 00:35:18.064 "method": "bdev_nvme_attach_controller" 00:35:18.064 }' 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:18.064 06:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.064 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:18.064 ... 00:35:18.064 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:18.064 ... 
00:35:18.064 fio-3.35 00:35:18.064 Starting 4 threads 00:35:18.064 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.329 00:35:23.329 filename0: (groupid=0, jobs=1): err= 0: pid=1907965: Tue Jul 23 06:31:15 2024 00:35:23.329 read: IOPS=1635, BW=12.8MiB/s (13.4MB/s)(63.9MiB/5003msec) 00:35:23.329 slat (nsec): min=5344, max=57160, avg=13298.59, stdev=7230.32 00:35:23.329 clat (usec): min=2751, max=46142, avg=4846.84, stdev=1426.54 00:35:23.329 lat (usec): min=2762, max=46175, avg=4860.14, stdev=1426.49 00:35:23.329 clat percentiles (usec): 00:35:23.329 | 1.00th=[ 3752], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:35:23.329 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:35:23.329 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5342], 95.00th=[ 6063], 00:35:23.329 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 9372], 99.95th=[45876], 00:35:23.329 | 99.99th=[46400] 00:35:23.329 bw ( KiB/s): min=10517, max=13776, per=24.75%, avg=13082.10, stdev=953.73, samples=10 00:35:23.329 iops : min= 1314, max= 1722, avg=1635.20, stdev=119.40, samples=10 00:35:23.329 lat (msec) : 4=1.96%, 10=97.95%, 50=0.10% 00:35:23.329 cpu : usr=93.54%, sys=5.02%, ctx=285, majf=0, minf=9 00:35:23.329 IO depths : 1=0.1%, 2=1.4%, 4=72.2%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.329 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.329 issued rwts: total=8183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.329 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.329 filename0: (groupid=0, jobs=1): err= 0: pid=1907966: Tue Jul 23 06:31:15 2024 00:35:23.329 read: IOPS=1634, BW=12.8MiB/s (13.4MB/s)(63.9MiB/5002msec) 00:35:23.329 slat (usec): min=5, max=217, avg=11.39, stdev= 5.76 00:35:23.329 clat (usec): min=924, max=8817, avg=4861.33, stdev=780.46 00:35:23.329 lat (usec): min=943, max=8825, avg=4872.72, stdev=779.68 00:35:23.329 clat percentiles (usec): 00:35:23.329 | 1.00th=[ 2606], 5.00th=[ 4146], 10.00th=[ 4359], 20.00th=[ 4490], 00:35:23.329 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:35:23.329 | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 5932], 95.00th=[ 6849], 00:35:23.329 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 8094], 99.95th=[ 8586], 00:35:23.329 | 99.99th=[ 8848] 00:35:23.329 bw ( KiB/s): min=12320, max=13648, per=24.73%, avg=13072.00, stdev=460.52, samples=10 00:35:23.330 iops : min= 1540, max= 1706, avg=1634.00, stdev=57.57, samples=10 00:35:23.330 lat (usec) : 1000=0.02% 00:35:23.330 lat (msec) : 4=3.56%, 10=96.42% 00:35:23.330 cpu : usr=95.00%, sys=4.52%, ctx=9, majf=0, minf=9 00:35:23.330 IO depths : 1=0.1%, 2=0.8%, 4=70.2%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.330 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.330 issued rwts: total=8175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.330 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.330 filename1: (groupid=0, jobs=1): err= 0: pid=1907967: Tue Jul 23 06:31:15 2024 00:35:23.330 read: IOPS=1690, BW=13.2MiB/s (13.8MB/s)(66.1MiB/5003msec) 00:35:23.330 slat (nsec): min=6340, max=53555, avg=10869.24, stdev=4917.96 00:35:23.330 clat (usec): min=1997, max=51274, avg=4696.79, stdev=1539.04 00:35:23.330 lat (usec): min=2005, max=51296, avg=4707.66, stdev=1539.02 00:35:23.330 clat percentiles (usec): 00:35:23.330 | 
1.00th=[ 3195], 5.00th=[ 3654], 10.00th=[ 4113], 20.00th=[ 4359], 00:35:23.330 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4686], 60.00th=[ 4752], 00:35:23.330 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5735], 00:35:23.330 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[ 8291], 99.95th=[51119], 00:35:23.330 | 99.99th=[51119] 00:35:23.330 bw ( KiB/s): min=11616, max=14016, per=25.58%, avg=13521.60, stdev=689.53, samples=10 00:35:23.330 iops : min= 1452, max= 1752, avg=1690.20, stdev=86.19, samples=10 00:35:23.330 lat (msec) : 2=0.01%, 4=8.40%, 10=91.50%, 100=0.09% 00:35:23.330 cpu : usr=94.10%, sys=5.14%, ctx=6, majf=0, minf=0 00:35:23.330 IO depths : 1=0.1%, 2=4.1%, 4=69.1%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.330 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.330 issued rwts: total=8456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.330 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.330 filename1: (groupid=0, jobs=1): err= 0: pid=1907968: Tue Jul 23 06:31:15 2024 00:35:23.330 read: IOPS=1648, BW=12.9MiB/s (13.5MB/s)(64.4MiB/5001msec) 00:35:23.330 slat (nsec): min=5252, max=53469, avg=11848.78, stdev=5711.64 00:35:23.330 clat (usec): min=1882, max=8150, avg=4816.76, stdev=843.97 00:35:23.330 lat (usec): min=1899, max=8158, avg=4828.61, stdev=843.25 00:35:23.330 clat percentiles (usec): 00:35:23.330 | 1.00th=[ 2540], 5.00th=[ 3785], 10.00th=[ 4178], 20.00th=[ 4424], 00:35:23.330 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:35:23.330 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5997], 95.00th=[ 6915], 00:35:23.330 | 99.00th=[ 7570], 99.50th=[ 7635], 99.90th=[ 7898], 99.95th=[ 7963], 00:35:23.330 | 99.99th=[ 8160] 00:35:23.330 bw ( KiB/s): min=12352, max=15134, per=24.95%, avg=13188.60, stdev=818.55, samples=10 00:35:23.330 iops : min= 1544, max= 1891, avg=1648.50, stdev=102.12, samples=10 00:35:23.330 lat (msec) : 2=0.04%, 4=7.24%, 10=92.72% 00:35:23.330 cpu : usr=95.38%, sys=4.14%, ctx=8, majf=0, minf=9 00:35:23.330 IO depths : 1=0.1%, 2=2.3%, 4=67.9%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.330 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.330 issued rwts: total=8246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.330 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.330 00:35:23.330 Run status group 0 (all jobs): 00:35:23.330 READ: bw=51.6MiB/s (54.1MB/s), 12.8MiB/s-13.2MiB/s (13.4MB/s-13.8MB/s), io=258MiB (271MB), run=5001-5003msec 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 06:31:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 00:35:23.330 real 0m24.189s 00:35:23.330 user 4m28.678s 00:35:23.330 sys 0m8.462s 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 ************************************ 00:35:23.330 END TEST fio_dif_rand_params 00:35:23.330 ************************************ 00:35:23.330 06:31:16 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:23.330 06:31:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:23.330 06:31:16 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:23.330 06:31:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 ************************************ 00:35:23.330 START TEST fio_dif_digest 00:35:23.330 ************************************ 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 bdev_null0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.330 [2024-07-23 06:31:16.332451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.330 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:23.330 { 00:35:23.330 "params": { 00:35:23.330 "name": "Nvme$subsystem", 00:35:23.330 "trtype": "$TEST_TRANSPORT", 00:35:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.330 "adrfam": "ipv4", 00:35:23.330 "trsvcid": "$NVMF_PORT", 00:35:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.331 
"hdgst": ${hdgst:-false}, 00:35:23.331 "ddgst": ${ddgst:-false} 00:35:23.331 }, 00:35:23.331 "method": "bdev_nvme_attach_controller" 00:35:23.331 } 00:35:23.331 EOF 00:35:23.331 )") 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:23.331 "params": { 00:35:23.331 "name": "Nvme0", 00:35:23.331 "trtype": "tcp", 00:35:23.331 "traddr": "10.0.0.2", 00:35:23.331 "adrfam": "ipv4", 00:35:23.331 "trsvcid": "4420", 00:35:23.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:23.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:23.331 "hdgst": true, 00:35:23.331 "ddgst": true 00:35:23.331 }, 00:35:23.331 "method": "bdev_nvme_attach_controller" 00:35:23.331 }' 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:23.331 06:31:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.331 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:23.331 ... 
00:35:23.331 fio-3.35 00:35:23.331 Starting 3 threads 00:35:23.331 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.590 00:35:35.590 filename0: (groupid=0, jobs=1): err= 0: pid=1908720: Tue Jul 23 06:31:27 2024 00:35:35.590 read: IOPS=144, BW=18.1MiB/s (19.0MB/s)(182MiB/10047msec) 00:35:35.590 slat (nsec): min=5687, max=82423, avg=13069.79, stdev=2970.46 00:35:35.590 clat (msec): min=10, max=100, avg=20.66, stdev=11.48 00:35:35.590 lat (msec): min=10, max=100, avg=20.67, stdev=11.48 00:35:35.590 clat percentiles (msec): 00:35:35.590 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 17], 00:35:35.590 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 19], 00:35:35.590 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 21], 95.00th=[ 58], 00:35:35.590 | 99.00th=[ 61], 99.50th=[ 61], 99.90th=[ 100], 99.95th=[ 101], 00:35:35.590 | 99.99th=[ 101] 00:35:35.590 bw ( KiB/s): min=12544, max=23296, per=27.47%, avg=18598.40, stdev=2660.79, samples=20 00:35:35.590 iops : min= 98, max= 182, avg=145.30, stdev=20.79, samples=20 00:35:35.590 lat (msec) : 20=84.89%, 50=7.42%, 100=7.62%, 250=0.07% 00:35:35.590 cpu : usr=91.63%, sys=7.90%, ctx=21, majf=0, minf=139 00:35:35.590 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.590 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.590 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:35.590 filename0: (groupid=0, jobs=1): err= 0: pid=1908721: Tue Jul 23 06:31:27 2024 00:35:35.590 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(244MiB/10045msec) 00:35:35.590 slat (nsec): min=5776, max=35378, avg=13246.18, stdev=2203.81 00:35:35.590 clat (usec): min=6396, max=59946, avg=15354.73, stdev=6080.83 00:35:35.590 lat (usec): min=6408, max=59959, avg=15367.98, stdev=6080.96 00:35:35.590 clat percentiles (usec): 00:35:35.591 | 1.00th=[ 7177], 5.00th=[10028], 10.00th=[10552], 20.00th=[11731], 00:35:35.591 | 30.00th=[13698], 40.00th=[14746], 50.00th=[15401], 60.00th=[15926], 00:35:35.591 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:35:35.591 | 99.00th=[56361], 99.50th=[56886], 99.90th=[59507], 99.95th=[60031], 00:35:35.591 | 99.99th=[60031] 00:35:35.591 bw ( KiB/s): min=21760, max=27904, per=36.90%, avg=24988.20, stdev=1816.50, samples=20 00:35:35.591 iops : min= 170, max= 218, avg=195.20, stdev=14.18, samples=20 00:35:35.591 lat (msec) : 10=5.27%, 20=92.79%, 50=0.20%, 100=1.74% 00:35:35.591 cpu : usr=91.84%, sys=7.66%, ctx=25, majf=0, minf=195 00:35:35.591 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.591 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.591 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:35.591 filename0: (groupid=0, jobs=1): err= 0: pid=1908722: Tue Jul 23 06:31:27 2024 00:35:35.591 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10046msec) 00:35:35.591 slat (nsec): min=5794, max=35791, avg=13104.97, stdev=2213.25 00:35:35.591 clat (usec): min=7080, max=60089, avg=15788.32, stdev=6949.29 00:35:35.591 lat (usec): min=7092, max=60101, avg=15801.43, stdev=6949.39 00:35:35.591 clat percentiles (usec): 00:35:35.591 | 1.00th=[ 8094], 5.00th=[10290], 
10.00th=[10814], 20.00th=[11863], 00:35:35.591 | 30.00th=[14091], 40.00th=[15008], 50.00th=[15533], 60.00th=[16057], 00:35:35.591 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17695], 95.00th=[18482], 00:35:35.591 | 99.00th=[57410], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:35:35.591 | 99.99th=[60031] 00:35:35.591 bw ( KiB/s): min=19456, max=28672, per=35.95%, avg=24345.60, stdev=2486.04, samples=20 00:35:35.591 iops : min= 152, max= 224, avg=190.20, stdev=19.42, samples=20 00:35:35.591 lat (msec) : 10=3.47%, 20=93.96%, 50=0.16%, 100=2.42% 00:35:35.591 cpu : usr=91.32%, sys=8.20%, ctx=22, majf=0, minf=143 00:35:35.591 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.591 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.591 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:35.591 00:35:35.591 Run status group 0 (all jobs): 00:35:35.591 READ: bw=66.1MiB/s (69.3MB/s), 18.1MiB/s-24.3MiB/s (19.0MB/s-25.5MB/s), io=664MiB (697MB), run=10045-10047msec 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.591 00:35:35.591 real 0m11.039s 00:35:35.591 user 0m28.793s 00:35:35.591 sys 0m2.650s 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:35.591 06:31:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.591 ************************************ 00:35:35.591 END TEST fio_dif_digest 00:35:35.591 ************************************ 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:35.591 06:31:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:35.591 06:31:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:35.591 
rmmod nvme_tcp 00:35:35.591 rmmod nvme_fabrics 00:35:35.591 rmmod nvme_keyring 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1902790 ']' 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1902790 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1902790 ']' 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1902790 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1902790 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1902790' 00:35:35.591 killing process with pid 1902790 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1902790 00:35:35.591 06:31:27 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1902790 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:35.591 06:31:27 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:35.591 Waiting for block devices as requested 00:35:35.591 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:35.591 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:35.591 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:35.850 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:35.850 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:35.850 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:36.109 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:36.109 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:36.109 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:36.109 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:36.367 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:36.367 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:36.367 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:36.367 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:36.625 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:36.625 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:36.625 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:36.884 06:31:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:36.884 06:31:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:36.884 06:31:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:36.884 06:31:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:36.884 06:31:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.884 06:31:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:36.885 06:31:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.786 06:31:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:38.786 00:35:38.786 real 1m6.312s 00:35:38.786 user 6m24.562s 00:35:38.786 sys 0m20.208s 00:35:38.786 06:31:32 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:38.786 
06:31:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:38.786 ************************************ 00:35:38.786 END TEST nvmf_dif 00:35:38.786 ************************************ 00:35:38.786 06:31:32 -- common/autotest_common.sh@1142 -- # return 0 00:35:38.786 06:31:32 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:38.786 06:31:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:38.786 06:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:38.786 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:35:38.786 ************************************ 00:35:38.786 START TEST nvmf_abort_qd_sizes 00:35:38.786 ************************************ 00:35:38.786 06:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:39.045 * Looking for test storage... 00:35:39.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:39.045 06:31:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.046 06:31:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:39.046 06:31:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:40.947 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:40.947 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:40.948 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:40.948 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:40.948 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:40.948 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:41.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:35:41.207 00:35:41.207 --- 10.0.0.2 ping statistics --- 00:35:41.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.207 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:41.207 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:41.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:35:41.207 00:35:41.207 --- 10.0.0.1 ping statistics --- 00:35:41.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.207 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:35:41.208 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.208 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:41.208 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:41.208 06:31:34 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:42.584 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:42.584 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:42.584 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:42.584 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:42.584 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:42.584 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:42.584 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:42.584 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:42.585 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:43.525 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1913614 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1913614 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1913614 ']' 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:43.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:43.525 06:31:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:43.525 [2024-07-23 06:31:36.757847] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:35:43.525 [2024-07-23 06:31:36.757940] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:43.525 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.525 [2024-07-23 06:31:36.796835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:43.525 [2024-07-23 06:31:36.822743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:43.785 [2024-07-23 06:31:36.914283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:43.785 [2024-07-23 06:31:36.914343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:43.785 [2024-07-23 06:31:36.914371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:43.785 [2024-07-23 06:31:36.914382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:43.785 [2024-07-23 06:31:36.914391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:43.785 [2024-07-23 06:31:36.914478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.785 [2024-07-23 06:31:36.914546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:43.785 [2024-07-23 06:31:36.914622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.785 [2024-07-23 06:31:36.914621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:43.785 06:31:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:43.785 ************************************ 00:35:43.785 START TEST spdk_target_abort 00:35:43.785 ************************************ 00:35:43.785 06:31:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:35:43.785 06:31:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:43.785 06:31:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:43.785 06:31:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.785 06:31:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:47.073 spdk_targetn1 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:47.073 [2024-07-23 06:31:39.927129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:47.073 [2024-07-23 06:31:39.959339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:47.073 06:31:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:47.073 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.371 Initializing NVMe Controllers 00:35:50.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:50.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:50.372 Initialization complete. Launching workers. 00:35:50.372 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9452, failed: 0 00:35:50.372 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 8237 00:35:50.372 success 772, unsuccess 443, failed 0 00:35:50.372 06:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:50.372 06:31:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:50.372 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.660 Initializing NVMe Controllers 00:35:53.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:53.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:53.660 Initialization complete. Launching workers. 00:35:53.660 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8724, failed: 0 00:35:53.660 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 7459 00:35:53.660 success 348, unsuccess 917, failed 0 00:35:53.660 06:31:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:53.660 06:31:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.660 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.952 Initializing NVMe Controllers 00:35:56.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:56.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:56.952 Initialization complete. Launching workers. 
00:35:56.952 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29542, failed: 0 00:35:56.952 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2547, failed to submit 26995 00:35:56.952 success 490, unsuccess 2057, failed 0 00:35:56.952 06:31:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:56.952 06:31:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.952 06:31:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.952 06:31:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.952 06:31:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:56.952 06:31:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.952 06:31:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1913614 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1913614 ']' 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1913614 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1913614 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1913614' 00:35:57.887 killing process with pid 1913614 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1913614 00:35:57.887 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1913614 00:35:58.146 00:35:58.146 real 0m14.212s 00:35:58.146 user 0m52.158s 00:35:58.146 sys 0m3.262s 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.146 ************************************ 00:35:58.146 END TEST spdk_target_abort 00:35:58.146 ************************************ 00:35:58.146 06:31:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:58.146 06:31:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:58.146 06:31:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:58.146 06:31:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:58.146 06:31:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:58.146 
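That completes the spdk_target_abort phase. Condensed, the sequence the test drove against the namespaced nvmf_tgt is the one visible in the trace above; a sketch using the plain rpc.py client instead of the test's rpc_cmd wrapper (same RPCs, wrapper and netns plumbing omitted for brevity):

  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # exposes spdk_targetn1
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do   # the three queue depths exercised above
      build/examples/abort -q $qd -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done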
************************************ 00:35:58.146 START TEST kernel_target_abort 00:35:58.146 ************************************ 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:58.146 06:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:59.087 Waiting for block devices as requested 00:35:59.345 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:59.345 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:59.604 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:59.604 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:59.604 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:59.604 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:59.862 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:59.862 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:59.862 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:59.862 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:00.128 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:00.128 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:00.128 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:00.128 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:00.387 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:00.387 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:00.387 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:00.646 No valid GPT data, bailing 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:00.646 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:00.647 06:31:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:00.647 00:36:00.647 Discovery Log Number of Records 2, Generation counter 2 00:36:00.647 =====Discovery Log Entry 0====== 00:36:00.647 trtype: tcp 00:36:00.647 adrfam: ipv4 00:36:00.647 subtype: current discovery subsystem 00:36:00.647 treq: not specified, sq flow control disable supported 00:36:00.647 portid: 1 00:36:00.647 trsvcid: 4420 00:36:00.647 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:00.647 traddr: 10.0.0.1 00:36:00.647 eflags: none 00:36:00.647 sectype: none 00:36:00.647 =====Discovery Log Entry 1====== 00:36:00.647 trtype: tcp 00:36:00.647 adrfam: ipv4 00:36:00.647 subtype: nvme subsystem 00:36:00.647 treq: not specified, sq flow control disable supported 00:36:00.647 portid: 1 00:36:00.647 trsvcid: 4420 00:36:00.647 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:00.647 traddr: 10.0.0.1 00:36:00.647 eflags: none 00:36:00.647 sectype: none 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.647 06:31:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.647 06:31:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.647 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.931 Initializing NVMe Controllers 00:36:03.931 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.931 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.931 Initialization complete. Launching workers. 00:36:03.931 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28454, failed: 0 00:36:03.931 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28454, failed to submit 0 00:36:03.931 success 0, unsuccess 28454, failed 0 00:36:03.931 06:31:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.931 06:31:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.931 EAL: No free 2048 kB hugepages reported on node 1 00:36:07.220 Initializing NVMe Controllers 00:36:07.220 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:07.220 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:07.220 Initialization complete. Launching workers. 
00:36:07.220 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58127, failed: 0 00:36:07.220 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14638, failed to submit 43489 00:36:07.220 success 0, unsuccess 14638, failed 0 00:36:07.220 06:32:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:07.220 06:32:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.220 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.539 Initializing NVMe Controllers 00:36:10.539 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:10.539 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:10.539 Initialization complete. Launching workers. 00:36:10.539 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56467, failed: 0 00:36:10.539 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14078, failed to submit 42389 00:36:10.539 success 0, unsuccess 14078, failed 0 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:10.539 06:32:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:11.106 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:11.106 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:11.106 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:11.106 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:11.106 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:11.106 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:11.364 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:11.364 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:11.364 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:11.364 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:11.364 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:11.364 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:11.364 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:11.365 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:11.365 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:11.365 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:12.300 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:12.300 00:36:12.301 real 0m14.216s 00:36:12.301 user 0m4.717s 00:36:12.301 sys 0m3.376s 00:36:12.301 06:32:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:12.301 06:32:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.301 ************************************ 00:36:12.301 END TEST kernel_target_abort 00:36:12.301 ************************************ 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:12.301 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:12.301 rmmod nvme_tcp 00:36:12.301 rmmod nvme_fabrics 00:36:12.301 rmmod nvme_keyring 00:36:12.558 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:12.558 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:12.558 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:12.558 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1913614 ']' 00:36:12.558 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1913614 00:36:12.558 06:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1913614 ']' 00:36:12.559 06:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1913614 00:36:12.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1913614) - No such process 00:36:12.559 06:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1913614 is not found' 00:36:12.559 Process with pid 1913614 is not found 00:36:12.559 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:12.559 06:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:13.494 Waiting for block devices as requested 00:36:13.494 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:13.753 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:13.753 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:13.753 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:14.012 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:14.012 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:14.012 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:14.012 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:14.273 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:14.273 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:14.273 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:14.273 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:14.273 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:14.534 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:36:14.534 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:14.534 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:14.534 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:14.793 06:32:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:14.793 06:32:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:14.793 06:32:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:14.793 06:32:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:14.793 06:32:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.793 06:32:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:14.793 06:32:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.699 06:32:09 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:16.699 00:36:16.699 real 0m37.911s 00:36:16.699 user 0m59.069s 00:36:16.699 sys 0m10.051s 00:36:16.699 06:32:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:16.699 06:32:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:16.699 ************************************ 00:36:16.699 END TEST nvmf_abort_qd_sizes 00:36:16.699 ************************************ 00:36:16.699 06:32:10 -- common/autotest_common.sh@1142 -- # return 0 00:36:16.699 06:32:10 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:16.699 06:32:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:16.699 06:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:16.699 06:32:10 -- common/autotest_common.sh@10 -- # set +x 00:36:16.958 ************************************ 00:36:16.958 START TEST keyring_file 00:36:16.958 ************************************ 00:36:16.958 06:32:10 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:16.958 * Looking for test storage... 
00:36:16.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.958 06:32:10 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.958 06:32:10 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.958 06:32:10 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.958 06:32:10 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.958 06:32:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.958 06:32:10 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.958 06:32:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:16.958 06:32:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.j7P9V2QrMZ 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:16.958 06:32:10 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.j7P9V2QrMZ 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.j7P9V2QrMZ 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.j7P9V2QrMZ 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Fg524TJVOZ 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:16.958 06:32:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Fg524TJVOZ 00:36:16.958 06:32:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Fg524TJVOZ 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Fg524TJVOZ 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@30 -- # tgtpid=1919364 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:16.958 06:32:10 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1919364 00:36:16.958 06:32:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1919364 ']' 00:36:16.958 06:32:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.958 06:32:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:16.958 06:32:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.958 06:32:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:16.958 06:32:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:16.958 [2024-07-23 06:32:10.242132] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:36:16.958 [2024-07-23 06:32:10.242232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919364 ] 00:36:16.958 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.958 [2024-07-23 06:32:10.275348] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
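Before spdk_tgt comes up, keyring/common.sh has already prepared the two file-backed TLS PSKs that the test later registers as key0 and key1. Roughly, per key (a sketch: the hex-to-interchange conversion is done by an inline python helper in nvmf/common.sh, and the temp paths are whatever mktemp returned on this run):

  path=$(mktemp)              # e.g. /tmp/tmp.j7P9V2QrMZ for key0
  # write the NVMeTLSkey-1 interchange form of 00112233445566778899aabbccddeeff into "$path"
  chmod 0600 "$path"          # the test restricts the key file before handing it to the keyring
  # later registered with the bdevperf instance via:
  #   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"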
00:36:17.219 [2024-07-23 06:32:10.302421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.219 [2024-07-23 06:32:10.388277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:17.478 06:32:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:17.478 [2024-07-23 06:32:10.631087] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.478 null0 00:36:17.478 [2024-07-23 06:32:10.663116] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:17.478 [2024-07-23 06:32:10.663546] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:17.478 [2024-07-23 06:32:10.671115] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.478 06:32:10 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.478 06:32:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:17.478 [2024-07-23 06:32:10.683136] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:17.478 request: 00:36:17.478 { 00:36:17.479 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.479 "secure_channel": false, 00:36:17.479 "listen_address": { 00:36:17.479 "trtype": "tcp", 00:36:17.479 "traddr": "127.0.0.1", 00:36:17.479 "trsvcid": "4420" 00:36:17.479 }, 00:36:17.479 "method": "nvmf_subsystem_add_listener", 00:36:17.479 "req_id": 1 00:36:17.479 } 00:36:17.479 Got JSON-RPC error response 00:36:17.479 response: 00:36:17.479 { 00:36:17.479 "code": -32602, 00:36:17.479 "message": "Invalid parameters" 00:36:17.479 } 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:17.479 06:32:10 keyring_file -- keyring/file.sh@46 -- # bperfpid=1919376 00:36:17.479 06:32:10 
keyring_file -- keyring/file.sh@48 -- # waitforlisten 1919376 /var/tmp/bperf.sock 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1919376 ']' 00:36:17.479 06:32:10 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:17.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:17.479 06:32:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:17.479 [2024-07-23 06:32:10.731648] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:36:17.479 [2024-07-23 06:32:10.731735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919376 ] 00:36:17.479 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.479 [2024-07-23 06:32:10.762835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:17.479 [2024-07-23 06:32:10.794709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.737 [2024-07-23 06:32:10.886778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.737 06:32:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:17.737 06:32:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:17.737 06:32:10 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:17.737 06:32:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:17.995 06:32:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Fg524TJVOZ 00:36:17.995 06:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Fg524TJVOZ 00:36:18.254 06:32:11 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:18.254 06:32:11 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:18.254 06:32:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.254 06:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.254 06:32:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.513 06:32:11 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.j7P9V2QrMZ == \/\t\m\p\/\t\m\p\.\j\7\P\9\V\2\Q\r\M\Z ]] 00:36:18.513 06:32:11 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:18.513 06:32:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:18.513 06:32:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:18.513 06:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.513 06:32:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:18.771 06:32:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Fg524TJVOZ == \/\t\m\p\/\t\m\p\.\F\g\5\2\4\T\J\V\O\Z ]] 00:36:18.771 06:32:12 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:18.771 06:32:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.771 06:32:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.771 06:32:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.771 06:32:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.771 06:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.029 06:32:12 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:19.029 06:32:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:19.029 06:32:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:19.029 06:32:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.029 06:32:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.029 06:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.029 06:32:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.286 06:32:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:19.287 06:32:12 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.287 06:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.545 [2024-07-23 06:32:12.743509] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:19.545 nvme0n1 00:36:19.545 06:32:12 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:19.545 06:32:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.545 06:32:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.545 06:32:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.545 06:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.545 06:32:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.804 06:32:13 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:19.804 06:32:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:19.804 06:32:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:19.804 06:32:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.804 06:32:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.804 06:32:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:19.804 06:32:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:20.062 06:32:13 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:20.062 06:32:13 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:20.321 Running I/O for 1 seconds... 00:36:21.270 00:36:21.270 Latency(us) 00:36:21.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.270 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:21.270 nvme0n1 : 1.03 4331.18 16.92 0.00 0.00 29192.46 9563.40 46603.38 00:36:21.270 =================================================================================================================== 00:36:21.270 Total : 4331.18 16.92 0.00 0.00 29192.46 9563.40 46603.38 00:36:21.270 0 00:36:21.270 06:32:14 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:21.270 06:32:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:21.528 06:32:14 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:21.528 06:32:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:21.528 06:32:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:21.528 06:32:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.528 06:32:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.528 06:32:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:21.787 06:32:14 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:21.787 06:32:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:21.787 06:32:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:21.787 06:32:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:21.787 06:32:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.787 06:32:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.787 06:32:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.046 06:32:15 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:22.046 06:32:15 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.046 06:32:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:22.046 06:32:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.046 06:32:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:22.046 06:32:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:22.046 06:32:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:22.046 06:32:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:22.047 06:32:15 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.047 06:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.306 [2024-07-23 06:32:15.456869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:22.306 [2024-07-23 06:32:15.457106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10154e0 (107): Transport endpoint is not connected 00:36:22.306 [2024-07-23 06:32:15.458092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10154e0 (9): Bad file descriptor 00:36:22.306 [2024-07-23 06:32:15.459090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:22.306 [2024-07-23 06:32:15.459113] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:22.306 [2024-07-23 06:32:15.459129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:22.306 request: 00:36:22.306 { 00:36:22.306 "name": "nvme0", 00:36:22.306 "trtype": "tcp", 00:36:22.306 "traddr": "127.0.0.1", 00:36:22.306 "adrfam": "ipv4", 00:36:22.306 "trsvcid": "4420", 00:36:22.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:22.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:22.306 "prchk_reftag": false, 00:36:22.306 "prchk_guard": false, 00:36:22.306 "hdgst": false, 00:36:22.306 "ddgst": false, 00:36:22.306 "psk": "key1", 00:36:22.306 "method": "bdev_nvme_attach_controller", 00:36:22.306 "req_id": 1 00:36:22.306 } 00:36:22.306 Got JSON-RPC error response 00:36:22.306 response: 00:36:22.306 { 00:36:22.306 "code": -5, 00:36:22.306 "message": "Input/output error" 00:36:22.306 } 00:36:22.306 06:32:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:22.306 06:32:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:22.306 06:32:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:22.306 06:32:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:22.306 06:32:15 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:22.306 06:32:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.306 06:32:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.306 06:32:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.306 06:32:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.306 06:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.565 06:32:15 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:22.565 06:32:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:22.565 06:32:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:22.565 06:32:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.565 06:32:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.565 06:32:15 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.565 06:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.823 06:32:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:22.823 06:32:15 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:22.823 06:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:23.082 06:32:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:23.082 06:32:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:23.341 06:32:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:23.341 06:32:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.341 06:32:16 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:23.600 06:32:16 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:23.600 06:32:16 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.j7P9V2QrMZ 00:36:23.600 06:32:16 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:23.600 06:32:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:23.600 06:32:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:23.600 06:32:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:23.600 06:32:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:23.600 06:32:16 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:23.600 06:32:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:23.600 06:32:16 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:23.600 06:32:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:23.858 [2024-07-23 06:32:16.977027] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.j7P9V2QrMZ': 0100660 00:36:23.858 [2024-07-23 06:32:16.977071] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:23.858 request: 00:36:23.858 { 00:36:23.858 "name": "key0", 00:36:23.859 "path": "/tmp/tmp.j7P9V2QrMZ", 00:36:23.859 "method": "keyring_file_add_key", 00:36:23.859 "req_id": 1 00:36:23.859 } 00:36:23.859 Got JSON-RPC error response 00:36:23.859 response: 00:36:23.859 { 00:36:23.859 "code": -1, 00:36:23.859 "message": "Operation not permitted" 00:36:23.859 } 00:36:23.859 06:32:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:23.859 06:32:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:23.859 06:32:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:23.859 06:32:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:23.859 06:32:16 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.j7P9V2QrMZ 00:36:23.859 06:32:16 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:23.859 06:32:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j7P9V2QrMZ 00:36:24.117 06:32:17 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.j7P9V2QrMZ 00:36:24.117 06:32:17 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:24.117 06:32:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:24.117 06:32:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:24.117 06:32:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.117 06:32:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.117 06:32:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:24.376 06:32:17 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:24.376 06:32:17 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.376 06:32:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:24.376 06:32:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.376 06:32:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:24.376 06:32:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:24.376 06:32:17 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:24.376 06:32:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:24.376 06:32:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.376 06:32:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.376 [2024-07-23 06:32:17.715024] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.j7P9V2QrMZ': No such file or directory 00:36:24.376 [2024-07-23 06:32:17.715059] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:24.376 [2024-07-23 06:32:17.715095] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:24.376 [2024-07-23 06:32:17.715106] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:24.376 [2024-07-23 06:32:17.715117] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:24.376 request: 00:36:24.376 { 00:36:24.376 "name": "nvme0", 00:36:24.376 "trtype": "tcp", 00:36:24.376 "traddr": "127.0.0.1", 00:36:24.376 "adrfam": "ipv4", 00:36:24.376 "trsvcid": "4420", 00:36:24.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.376 "prchk_reftag": false, 00:36:24.376 
"prchk_guard": false, 00:36:24.376 "hdgst": false, 00:36:24.376 "ddgst": false, 00:36:24.376 "psk": "key0", 00:36:24.376 "method": "bdev_nvme_attach_controller", 00:36:24.376 "req_id": 1 00:36:24.376 } 00:36:24.376 Got JSON-RPC error response 00:36:24.376 response: 00:36:24.376 { 00:36:24.376 "code": -19, 00:36:24.376 "message": "No such device" 00:36:24.376 } 00:36:24.636 06:32:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:24.636 06:32:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:24.636 06:32:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:24.636 06:32:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:24.636 06:32:17 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:24.636 06:32:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:24.636 06:32:17 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:24.636 06:32:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:24.636 06:32:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:24.636 06:32:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:24.637 06:32:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:24.895 06:32:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:24.895 06:32:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u8SzJVEFot 00:36:24.895 06:32:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:24.895 06:32:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:24.895 06:32:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:24.896 06:32:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:24.896 06:32:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:24.896 06:32:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:24.896 06:32:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:24.896 06:32:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u8SzJVEFot 00:36:24.896 06:32:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u8SzJVEFot 00:36:24.896 06:32:18 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.u8SzJVEFot 00:36:24.896 06:32:18 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u8SzJVEFot 00:36:24.896 06:32:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u8SzJVEFot 00:36:25.154 06:32:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:25.154 06:32:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:25.413 nvme0n1 00:36:25.413 06:32:18 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:25.413 06:32:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.413 06:32:18 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.413 06:32:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.413 06:32:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.413 06:32:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.673 06:32:18 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:25.673 06:32:18 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:25.673 06:32:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:25.933 06:32:19 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:25.933 06:32:19 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:25.933 06:32:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.933 06:32:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.933 06:32:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.192 06:32:19 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:26.192 06:32:19 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:26.192 06:32:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:26.192 06:32:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.192 06:32:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.192 06:32:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.192 06:32:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.451 06:32:19 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:26.451 06:32:19 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:26.451 06:32:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:26.709 06:32:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:26.709 06:32:19 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:26.709 06:32:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.968 06:32:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:26.968 06:32:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u8SzJVEFot 00:36:26.968 06:32:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u8SzJVEFot 00:36:27.226 06:32:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Fg524TJVOZ 00:36:27.226 06:32:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Fg524TJVOZ 00:36:27.486 06:32:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.486 06:32:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.745 nvme0n1 00:36:27.745 06:32:20 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:27.745 06:32:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:28.005 06:32:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:28.005 "subsystems": [ 00:36:28.005 { 00:36:28.005 "subsystem": "keyring", 00:36:28.005 "config": [ 00:36:28.005 { 00:36:28.005 "method": "keyring_file_add_key", 00:36:28.005 "params": { 00:36:28.005 "name": "key0", 00:36:28.005 "path": "/tmp/tmp.u8SzJVEFot" 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "keyring_file_add_key", 00:36:28.005 "params": { 00:36:28.005 "name": "key1", 00:36:28.005 "path": "/tmp/tmp.Fg524TJVOZ" 00:36:28.005 } 00:36:28.005 } 00:36:28.005 ] 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "subsystem": "iobuf", 00:36:28.005 "config": [ 00:36:28.005 { 00:36:28.005 "method": "iobuf_set_options", 00:36:28.005 "params": { 00:36:28.005 "small_pool_count": 8192, 00:36:28.005 "large_pool_count": 1024, 00:36:28.005 "small_bufsize": 8192, 00:36:28.005 "large_bufsize": 135168 00:36:28.005 } 00:36:28.005 } 00:36:28.005 ] 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "subsystem": "sock", 00:36:28.005 "config": [ 00:36:28.005 { 00:36:28.005 "method": "sock_set_default_impl", 00:36:28.005 "params": { 00:36:28.005 "impl_name": "posix" 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "sock_impl_set_options", 00:36:28.005 "params": { 00:36:28.005 "impl_name": "ssl", 00:36:28.005 "recv_buf_size": 4096, 00:36:28.005 "send_buf_size": 4096, 00:36:28.005 "enable_recv_pipe": true, 00:36:28.005 "enable_quickack": false, 00:36:28.005 "enable_placement_id": 0, 00:36:28.005 "enable_zerocopy_send_server": true, 00:36:28.005 "enable_zerocopy_send_client": false, 00:36:28.005 "zerocopy_threshold": 0, 00:36:28.005 "tls_version": 0, 00:36:28.005 "enable_ktls": false 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "sock_impl_set_options", 00:36:28.005 "params": { 00:36:28.005 "impl_name": "posix", 00:36:28.005 "recv_buf_size": 2097152, 00:36:28.005 "send_buf_size": 2097152, 00:36:28.005 "enable_recv_pipe": true, 00:36:28.005 "enable_quickack": false, 00:36:28.005 "enable_placement_id": 0, 00:36:28.005 "enable_zerocopy_send_server": true, 00:36:28.005 "enable_zerocopy_send_client": false, 00:36:28.005 "zerocopy_threshold": 0, 00:36:28.005 "tls_version": 0, 00:36:28.005 "enable_ktls": false 00:36:28.005 } 00:36:28.005 } 00:36:28.005 ] 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "subsystem": "vmd", 00:36:28.005 "config": [] 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "subsystem": "accel", 00:36:28.005 "config": [ 00:36:28.005 { 00:36:28.005 "method": "accel_set_options", 00:36:28.005 "params": { 00:36:28.005 "small_cache_size": 128, 00:36:28.005 "large_cache_size": 16, 00:36:28.005 "task_count": 2048, 00:36:28.005 "sequence_count": 2048, 00:36:28.005 "buf_count": 2048 00:36:28.005 } 00:36:28.005 } 00:36:28.005 ] 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "subsystem": "bdev", 00:36:28.005 "config": [ 00:36:28.005 { 00:36:28.005 "method": "bdev_set_options", 00:36:28.005 
"params": { 00:36:28.005 "bdev_io_pool_size": 65535, 00:36:28.005 "bdev_io_cache_size": 256, 00:36:28.005 "bdev_auto_examine": true, 00:36:28.005 "iobuf_small_cache_size": 128, 00:36:28.005 "iobuf_large_cache_size": 16 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "bdev_raid_set_options", 00:36:28.005 "params": { 00:36:28.005 "process_window_size_kb": 1024, 00:36:28.005 "process_max_bandwidth_mb_sec": 0 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "bdev_iscsi_set_options", 00:36:28.005 "params": { 00:36:28.005 "timeout_sec": 30 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "bdev_nvme_set_options", 00:36:28.005 "params": { 00:36:28.005 "action_on_timeout": "none", 00:36:28.005 "timeout_us": 0, 00:36:28.005 "timeout_admin_us": 0, 00:36:28.005 "keep_alive_timeout_ms": 10000, 00:36:28.005 "arbitration_burst": 0, 00:36:28.005 "low_priority_weight": 0, 00:36:28.005 "medium_priority_weight": 0, 00:36:28.005 "high_priority_weight": 0, 00:36:28.005 "nvme_adminq_poll_period_us": 10000, 00:36:28.005 "nvme_ioq_poll_period_us": 0, 00:36:28.005 "io_queue_requests": 512, 00:36:28.005 "delay_cmd_submit": true, 00:36:28.005 "transport_retry_count": 4, 00:36:28.005 "bdev_retry_count": 3, 00:36:28.005 "transport_ack_timeout": 0, 00:36:28.005 "ctrlr_loss_timeout_sec": 0, 00:36:28.005 "reconnect_delay_sec": 0, 00:36:28.005 "fast_io_fail_timeout_sec": 0, 00:36:28.005 "disable_auto_failback": false, 00:36:28.005 "generate_uuids": false, 00:36:28.005 "transport_tos": 0, 00:36:28.005 "nvme_error_stat": false, 00:36:28.005 "rdma_srq_size": 0, 00:36:28.005 "io_path_stat": false, 00:36:28.005 "allow_accel_sequence": false, 00:36:28.005 "rdma_max_cq_size": 0, 00:36:28.005 "rdma_cm_event_timeout_ms": 0, 00:36:28.005 "dhchap_digests": [ 00:36:28.005 "sha256", 00:36:28.005 "sha384", 00:36:28.005 "sha512" 00:36:28.005 ], 00:36:28.005 "dhchap_dhgroups": [ 00:36:28.005 "null", 00:36:28.005 "ffdhe2048", 00:36:28.005 "ffdhe3072", 00:36:28.005 "ffdhe4096", 00:36:28.005 "ffdhe6144", 00:36:28.005 "ffdhe8192" 00:36:28.005 ] 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "bdev_nvme_attach_controller", 00:36:28.005 "params": { 00:36:28.005 "name": "nvme0", 00:36:28.005 "trtype": "TCP", 00:36:28.005 "adrfam": "IPv4", 00:36:28.005 "traddr": "127.0.0.1", 00:36:28.005 "trsvcid": "4420", 00:36:28.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.005 "prchk_reftag": false, 00:36:28.005 "prchk_guard": false, 00:36:28.005 "ctrlr_loss_timeout_sec": 0, 00:36:28.005 "reconnect_delay_sec": 0, 00:36:28.005 "fast_io_fail_timeout_sec": 0, 00:36:28.005 "psk": "key0", 00:36:28.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.005 "hdgst": false, 00:36:28.005 "ddgst": false 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.005 "method": "bdev_nvme_set_hotplug", 00:36:28.005 "params": { 00:36:28.005 "period_us": 100000, 00:36:28.005 "enable": false 00:36:28.005 } 00:36:28.005 }, 00:36:28.005 { 00:36:28.006 "method": "bdev_wait_for_examine" 00:36:28.006 } 00:36:28.006 ] 00:36:28.006 }, 00:36:28.006 { 00:36:28.006 "subsystem": "nbd", 00:36:28.006 "config": [] 00:36:28.006 } 00:36:28.006 ] 00:36:28.006 }' 00:36:28.006 06:32:21 keyring_file -- keyring/file.sh@114 -- # killprocess 1919376 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1919376 ']' 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1919376 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1919376 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1919376' 00:36:28.006 killing process with pid 1919376 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@967 -- # kill 1919376 00:36:28.006 Received shutdown signal, test time was about 1.000000 seconds 00:36:28.006 00:36:28.006 Latency(us) 00:36:28.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.006 =================================================================================================================== 00:36:28.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:28.006 06:32:21 keyring_file -- common/autotest_common.sh@972 -- # wait 1919376 00:36:28.265 06:32:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=1920796 00:36:28.265 06:32:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1920796 /var/tmp/bperf.sock 00:36:28.265 06:32:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1920796 ']' 00:36:28.265 06:32:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:28.265 06:32:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:28.265 06:32:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:28.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:28.265 06:32:21 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:28.265 06:32:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:28.265 06:32:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:28.265 06:32:21 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:28.265 "subsystems": [ 00:36:28.265 { 00:36:28.265 "subsystem": "keyring", 00:36:28.265 "config": [ 00:36:28.265 { 00:36:28.265 "method": "keyring_file_add_key", 00:36:28.265 "params": { 00:36:28.265 "name": "key0", 00:36:28.265 "path": "/tmp/tmp.u8SzJVEFot" 00:36:28.265 } 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "method": "keyring_file_add_key", 00:36:28.265 "params": { 00:36:28.265 "name": "key1", 00:36:28.265 "path": "/tmp/tmp.Fg524TJVOZ" 00:36:28.265 } 00:36:28.265 } 00:36:28.265 ] 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "subsystem": "iobuf", 00:36:28.265 "config": [ 00:36:28.265 { 00:36:28.265 "method": "iobuf_set_options", 00:36:28.265 "params": { 00:36:28.265 "small_pool_count": 8192, 00:36:28.265 "large_pool_count": 1024, 00:36:28.265 "small_bufsize": 8192, 00:36:28.265 "large_bufsize": 135168 00:36:28.265 } 00:36:28.265 } 00:36:28.265 ] 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "subsystem": "sock", 00:36:28.265 "config": [ 00:36:28.265 { 00:36:28.265 "method": "sock_set_default_impl", 00:36:28.265 "params": { 00:36:28.265 "impl_name": "posix" 00:36:28.265 } 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "method": "sock_impl_set_options", 00:36:28.265 "params": { 00:36:28.265 "impl_name": "ssl", 00:36:28.265 "recv_buf_size": 4096, 00:36:28.265 "send_buf_size": 4096, 00:36:28.265 "enable_recv_pipe": true, 00:36:28.265 "enable_quickack": false, 00:36:28.265 "enable_placement_id": 0, 00:36:28.265 "enable_zerocopy_send_server": true, 00:36:28.265 "enable_zerocopy_send_client": false, 00:36:28.265 "zerocopy_threshold": 0, 00:36:28.265 "tls_version": 0, 00:36:28.265 "enable_ktls": false 00:36:28.265 } 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "method": "sock_impl_set_options", 00:36:28.265 "params": { 00:36:28.265 "impl_name": "posix", 00:36:28.265 "recv_buf_size": 2097152, 00:36:28.265 "send_buf_size": 2097152, 00:36:28.265 "enable_recv_pipe": true, 00:36:28.265 "enable_quickack": false, 00:36:28.265 "enable_placement_id": 0, 00:36:28.265 "enable_zerocopy_send_server": true, 00:36:28.265 "enable_zerocopy_send_client": false, 00:36:28.265 "zerocopy_threshold": 0, 00:36:28.265 "tls_version": 0, 00:36:28.265 "enable_ktls": false 00:36:28.265 } 00:36:28.265 } 00:36:28.265 ] 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "subsystem": "vmd", 00:36:28.265 "config": [] 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "subsystem": "accel", 00:36:28.265 "config": [ 00:36:28.265 { 00:36:28.265 "method": "accel_set_options", 00:36:28.265 "params": { 00:36:28.265 "small_cache_size": 128, 00:36:28.265 "large_cache_size": 16, 00:36:28.265 "task_count": 2048, 00:36:28.265 "sequence_count": 2048, 00:36:28.265 "buf_count": 2048 00:36:28.265 } 00:36:28.265 } 00:36:28.265 ] 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "subsystem": "bdev", 00:36:28.265 "config": [ 00:36:28.265 { 00:36:28.265 "method": "bdev_set_options", 00:36:28.265 "params": { 00:36:28.265 "bdev_io_pool_size": 65535, 00:36:28.265 "bdev_io_cache_size": 256, 00:36:28.265 "bdev_auto_examine": true, 00:36:28.265 "iobuf_small_cache_size": 128, 00:36:28.265 "iobuf_large_cache_size": 16 
00:36:28.265 } 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "method": "bdev_raid_set_options", 00:36:28.265 "params": { 00:36:28.265 "process_window_size_kb": 1024, 00:36:28.265 "process_max_bandwidth_mb_sec": 0 00:36:28.265 } 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "method": "bdev_iscsi_set_options", 00:36:28.265 "params": { 00:36:28.265 "timeout_sec": 30 00:36:28.265 } 00:36:28.265 }, 00:36:28.265 { 00:36:28.265 "method": "bdev_nvme_set_options", 00:36:28.265 "params": { 00:36:28.266 "action_on_timeout": "none", 00:36:28.266 "timeout_us": 0, 00:36:28.266 "timeout_admin_us": 0, 00:36:28.266 "keep_alive_timeout_ms": 10000, 00:36:28.266 "arbitration_burst": 0, 00:36:28.266 "low_priority_weight": 0, 00:36:28.266 "medium_priority_weight": 0, 00:36:28.266 "high_priority_weight": 0, 00:36:28.266 "nvme_adminq_poll_period_us": 10000, 00:36:28.266 "nvme_ioq_poll_period_us": 0, 00:36:28.266 "io_queue_requests": 512, 00:36:28.266 "delay_cmd_submit": true, 00:36:28.266 "transport_retry_count": 4, 00:36:28.266 "bdev_retry_count": 3, 00:36:28.266 "transport_ack_timeout": 0, 00:36:28.266 "ctrlr_loss_timeout_sec": 0, 00:36:28.266 "reconnect_delay_sec": 0, 00:36:28.266 "fast_io_fail_timeout_sec": 0, 00:36:28.266 "disable_auto_failback": false, 00:36:28.266 "generate_uuids": false, 00:36:28.266 "transport_tos": 0, 00:36:28.266 "nvme_error_stat": false, 00:36:28.266 "rdma_srq_size": 0, 00:36:28.266 "io_path_stat": false, 00:36:28.266 "allow_accel_sequence": false, 00:36:28.266 "rdma_max_cq_size": 0, 00:36:28.266 "rdma_cm_event_timeout_ms": 0, 00:36:28.266 "dhchap_digests": [ 00:36:28.266 "sha256", 00:36:28.266 "sha384", 00:36:28.266 "sha512" 00:36:28.266 ], 00:36:28.266 "dhchap_dhgroups": [ 00:36:28.266 "null", 00:36:28.266 "ffdhe2048", 00:36:28.266 "ffdhe3072", 00:36:28.266 "ffdhe4096", 00:36:28.266 "ffdhe6144", 00:36:28.266 "ffdhe8192" 00:36:28.266 ] 00:36:28.266 } 00:36:28.266 }, 00:36:28.266 { 00:36:28.266 "method": "bdev_nvme_attach_controller", 00:36:28.266 "params": { 00:36:28.266 "name": "nvme0", 00:36:28.266 "trtype": "TCP", 00:36:28.266 "adrfam": "IPv4", 00:36:28.266 "traddr": "127.0.0.1", 00:36:28.266 "trsvcid": "4420", 00:36:28.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.266 "prchk_reftag": false, 00:36:28.266 "prchk_guard": false, 00:36:28.266 "ctrlr_loss_timeout_sec": 0, 00:36:28.266 "reconnect_delay_sec": 0, 00:36:28.266 "fast_io_fail_timeout_sec": 0, 00:36:28.266 "psk": "key0", 00:36:28.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.266 "hdgst": false, 00:36:28.266 "ddgst": false 00:36:28.266 } 00:36:28.266 }, 00:36:28.266 { 00:36:28.266 "method": "bdev_nvme_set_hotplug", 00:36:28.266 "params": { 00:36:28.266 "period_us": 100000, 00:36:28.266 "enable": false 00:36:28.266 } 00:36:28.266 }, 00:36:28.266 { 00:36:28.266 "method": "bdev_wait_for_examine" 00:36:28.266 } 00:36:28.266 ] 00:36:28.266 }, 00:36:28.266 { 00:36:28.266 "subsystem": "nbd", 00:36:28.266 "config": [] 00:36:28.266 } 00:36:28.266 ] 00:36:28.266 }' 00:36:28.266 [2024-07-23 06:32:21.494426] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:36:28.266 [2024-07-23 06:32:21.494498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920796 ] 00:36:28.266 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.266 [2024-07-23 06:32:21.526433] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:28.266 [2024-07-23 06:32:21.556355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.524 [2024-07-23 06:32:21.647500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.524 [2024-07-23 06:32:21.828790] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:29.093 06:32:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:29.093 06:32:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:29.351 06:32:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:29.351 06:32:22 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:29.351 06:32:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.609 06:32:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:29.609 06:32:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.609 06:32:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:29.609 06:32:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.609 06:32:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:29.867 06:32:23 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:29.867 06:32:23 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:29.867 06:32:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:29.867 06:32:23 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:30.136 06:32:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:30.136 06:32:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:30.136 06:32:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.u8SzJVEFot /tmp/tmp.Fg524TJVOZ 00:36:30.136 06:32:23 keyring_file -- keyring/file.sh@20 -- # killprocess 1920796 00:36:30.136 06:32:23 keyring_file -- 
common/autotest_common.sh@948 -- # '[' -z 1920796 ']' 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1920796 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1920796 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1920796' 00:36:30.136 killing process with pid 1920796 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@967 -- # kill 1920796 00:36:30.136 Received shutdown signal, test time was about 1.000000 seconds 00:36:30.136 00:36:30.136 Latency(us) 00:36:30.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.136 =================================================================================================================== 00:36:30.136 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:30.136 06:32:23 keyring_file -- common/autotest_common.sh@972 -- # wait 1920796 00:36:30.396 06:32:23 keyring_file -- keyring/file.sh@21 -- # killprocess 1919364 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1919364 ']' 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1919364 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1919364 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1919364' 00:36:30.396 killing process with pid 1919364 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@967 -- # kill 1919364 00:36:30.396 [2024-07-23 06:32:23.693215] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:30.396 06:32:23 keyring_file -- common/autotest_common.sh@972 -- # wait 1919364 00:36:30.962 00:36:30.962 real 0m14.020s 00:36:30.962 user 0m34.661s 00:36:30.962 sys 0m3.260s 00:36:30.962 06:32:24 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:30.962 06:32:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:30.962 ************************************ 00:36:30.962 END TEST keyring_file 00:36:30.962 ************************************ 00:36:30.962 06:32:24 -- common/autotest_common.sh@1142 -- # return 0 00:36:30.962 06:32:24 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:30.962 06:32:24 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:30.962 06:32:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:30.962 06:32:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:30.962 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:36:30.962 ************************************ 00:36:30.962 
START TEST keyring_linux 00:36:30.962 ************************************ 00:36:30.962 06:32:24 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:30.962 * Looking for test storage... 00:36:30.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:30.962 06:32:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:30.962 06:32:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.962 06:32:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:30.962 06:32:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.962 06:32:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.962 06:32:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.962 06:32:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.962 06:32:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.963 06:32:24 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.963 06:32:24 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.963 06:32:24 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.963 06:32:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.963 06:32:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:36:30.963 06:32:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.963 06:32:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:30.963 06:32:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@705 -- # 
python - 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:30.963 /tmp/:spdk-test:key0 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:30.963 06:32:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:30.963 06:32:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:30.963 /tmp/:spdk-test:key1 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1921188 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:30.963 06:32:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1921188 00:36:30.963 06:32:24 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1921188 ']' 00:36:30.963 06:32:24 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.963 06:32:24 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:30.963 06:32:24 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.963 06:32:24 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:30.963 06:32:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.963 [2024-07-23 06:32:24.301761] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:36:30.963 [2024-07-23 06:32:24.301846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921188 ] 00:36:31.223 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.223 [2024-07-23 06:32:24.334193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:31.223 [2024-07-23 06:32:24.359679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.223 [2024-07-23 06:32:24.448157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:31.482 06:32:24 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:31.482 06:32:24 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:31.482 06:32:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:31.482 06:32:24 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:31.483 [2024-07-23 06:32:24.713345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.483 null0 00:36:31.483 [2024-07-23 06:32:24.745443] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:31.483 [2024-07-23 06:32:24.745922] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.483 06:32:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:31.483 484383041 00:36:31.483 06:32:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:31.483 876942303 00:36:31.483 06:32:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1921215 00:36:31.483 06:32:24 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:31.483 06:32:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1921215 /var/tmp/bperf.sock 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1921215 ']' 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:31.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:31.483 06:32:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:31.483 [2024-07-23 06:32:24.811712] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:36:31.483 [2024-07-23 06:32:24.811778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921215 ] 00:36:31.741 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.741 [2024-07-23 06:32:24.846059] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
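The keyring/linux.sh@66 and @67 steps above load the two formatted PSKs into the session keyring with keyctl; the serials they print (484383041 and 876942303) are what the test later resolves, compares, and unlinks during cleanup. The round-trip, reduced to its essentials (key name and payload shortened for illustration):

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s)   # add to the session keyring, print serial
found=$(keyctl search @s user :spdk-test:key0)                    # resolve the name back to a serial
[ "$found" = "$sn" ] || echo "unexpected serial number"
keyctl print "$sn"                                                # dump the payload for comparison
keyctl unlink "$sn"                                               # cleanup: remove the link(s), as cleanup() does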
00:36:31.741 [2024-07-23 06:32:24.876659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.741 [2024-07-23 06:32:24.967827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.741 06:32:25 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:31.741 06:32:25 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:31.741 06:32:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:31.741 06:32:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:31.999 06:32:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:31.999 06:32:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:32.262 06:32:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:32.262 06:32:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:32.521 [2024-07-23 06:32:25.820736] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:32.785 nvme0n1 00:36:32.785 06:32:25 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:32.785 06:32:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:32.785 06:32:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:32.785 06:32:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:32.785 06:32:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:32.785 06:32:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.043 06:32:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:33.043 06:32:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:33.043 06:32:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:33.043 06:32:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:33.043 06:32:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.043 06:32:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.043 06:32:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:33.303 06:32:26 keyring_linux -- keyring/linux.sh@25 -- # sn=484383041 00:36:33.303 06:32:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:33.303 06:32:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:33.303 06:32:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 484383041 == \4\8\4\3\8\3\0\4\1 ]] 00:36:33.303 06:32:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 484383041 00:36:33.303 06:32:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:33.303 06:32:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:33.303 Running I/O for 1 seconds... 00:36:34.240 00:36:34.240 Latency(us) 00:36:34.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.240 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:34.240 nvme0n1 : 1.02 3535.29 13.81 0.00 0.00 35814.23 7233.23 45244.11 00:36:34.240 =================================================================================================================== 00:36:34.240 Total : 3535.29 13.81 0.00 0.00 35814.23 7233.23 45244.11 00:36:34.240 0 00:36:34.240 06:32:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:34.240 06:32:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:34.499 06:32:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:34.499 06:32:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:34.499 06:32:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:34.499 06:32:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:34.499 06:32:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.499 06:32:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:34.757 06:32:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:34.757 06:32:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:34.757 06:32:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:34.757 06:32:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:34.757 06:32:28 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:34.757 06:32:28 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:34.757 06:32:28 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:34.757 06:32:28 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:34.757 06:32:28 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:34.757 06:32:28 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:34.757 06:32:28 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:34.757 06:32:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:35.015 [2024-07-23 06:32:28.316849] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:35.015 [2024-07-23 06:32:28.316869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a690 (107): Transport endpoint is not connected 00:36:35.015 [2024-07-23 06:32:28.317863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a690 (9): Bad file descriptor 00:36:35.015 [2024-07-23 06:32:28.318862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:35.015 [2024-07-23 06:32:28.318884] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:35.015 [2024-07-23 06:32:28.318912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:35.015 request: 00:36:35.015 { 00:36:35.015 "name": "nvme0", 00:36:35.015 "trtype": "tcp", 00:36:35.015 "traddr": "127.0.0.1", 00:36:35.015 "adrfam": "ipv4", 00:36:35.015 "trsvcid": "4420", 00:36:35.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:35.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:35.015 "prchk_reftag": false, 00:36:35.015 "prchk_guard": false, 00:36:35.015 "hdgst": false, 00:36:35.015 "ddgst": false, 00:36:35.015 "psk": ":spdk-test:key1", 00:36:35.015 "method": "bdev_nvme_attach_controller", 00:36:35.015 "req_id": 1 00:36:35.015 } 00:36:35.015 Got JSON-RPC error response 00:36:35.015 response: 00:36:35.015 { 00:36:35.015 "code": -5, 00:36:35.015 "message": "Input/output error" 00:36:35.015 } 00:36:35.015 06:32:28 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:35.015 06:32:28 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:35.015 06:32:28 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:35.015 06:32:28 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@33 -- # sn=484383041 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 484383041 00:36:35.015 1 links removed 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@33 -- # sn=876942303 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 876942303 00:36:35.015 1 links removed 00:36:35.015 06:32:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1921215 00:36:35.015 06:32:28 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1921215 ']' 00:36:35.015 06:32:28 keyring_linux 
-- common/autotest_common.sh@952 -- # kill -0 1921215 00:36:35.015 06:32:28 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:35.015 06:32:28 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1921215 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1921215' 00:36:35.273 killing process with pid 1921215 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@967 -- # kill 1921215 00:36:35.273 Received shutdown signal, test time was about 1.000000 seconds 00:36:35.273 00:36:35.273 Latency(us) 00:36:35.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.273 =================================================================================================================== 00:36:35.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@972 -- # wait 1921215 00:36:35.273 06:32:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1921188 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1921188 ']' 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1921188 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:35.273 06:32:28 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1921188 00:36:35.532 06:32:28 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:35.532 06:32:28 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:35.532 06:32:28 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1921188' 00:36:35.532 killing process with pid 1921188 00:36:35.532 06:32:28 keyring_linux -- common/autotest_common.sh@967 -- # kill 1921188 00:36:35.532 06:32:28 keyring_linux -- common/autotest_common.sh@972 -- # wait 1921188 00:36:35.790 00:36:35.790 real 0m4.917s 00:36:35.790 user 0m9.146s 00:36:35.790 sys 0m1.523s 00:36:35.790 06:32:29 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:35.790 06:32:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:35.790 ************************************ 00:36:35.790 END TEST keyring_linux 00:36:35.790 ************************************ 00:36:35.790 06:32:29 -- common/autotest_common.sh@1142 -- # return 0 00:36:35.790 06:32:29 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:35.790 06:32:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:35.790 
06:32:29 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:35.790 06:32:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:35.790 06:32:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:35.790 06:32:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:35.790 06:32:29 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:35.790 06:32:29 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:35.790 06:32:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:35.790 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:36:35.790 06:32:29 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:35.790 06:32:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:35.790 06:32:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:35.790 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:36:37.692 INFO: APP EXITING 00:36:37.692 INFO: killing all VMs 00:36:37.692 INFO: killing vhost app 00:36:37.692 INFO: EXIT DONE 00:36:38.627 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:38.627 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:38.628 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:38.628 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:38.628 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:38.628 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:38.628 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:38.628 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:38.628 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:38.887 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:38.887 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:38.887 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:38.887 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:38.887 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:38.887 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:38.887 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:38.887 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:39.823 Cleaning 00:36:39.823 Removing: /var/run/dpdk/spdk0/config 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:40.082 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:40.082 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:40.082 Removing: /var/run/dpdk/spdk1/config 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:40.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:40.082 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:36:40.082 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:40.082 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:40.082 Removing: /var/run/dpdk/spdk2/config 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:40.082 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:40.082 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:40.082 Removing: /var/run/dpdk/spdk3/config 00:36:40.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:40.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:40.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:40.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:40.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:40.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:40.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:40.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:40.083 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:40.083 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:40.083 Removing: /var/run/dpdk/spdk4/config 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:40.083 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:40.083 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:40.083 Removing: /dev/shm/bdev_svc_trace.1 00:36:40.083 Removing: /dev/shm/nvmf_trace.0 00:36:40.083 Removing: /dev/shm/spdk_tgt_trace.pid1601184 00:36:40.083 Removing: /var/run/dpdk/spdk0 00:36:40.083 Removing: /var/run/dpdk/spdk1 00:36:40.083 Removing: /var/run/dpdk/spdk2 00:36:40.083 Removing: /var/run/dpdk/spdk3 00:36:40.083 Removing: /var/run/dpdk/spdk4 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1599634 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1600365 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1601184 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1601614 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1602301 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1602442 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1603160 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1603209 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1603412 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1604719 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1605520 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1605817 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1606014 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1606220 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1606407 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1606565 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1606723 
00:36:40.083 Removing: /var/run/dpdk/spdk_pid1606960 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1607292 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1609570 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1609853 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1610014 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1610023 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1610455 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1610458 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1610889 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1610900 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1611185 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1611198 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1611362 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1611371 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1611861 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1612014 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1612210 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1612382 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1612404 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1612594 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1612747 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1612908 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1613180 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1613333 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1613492 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1613758 00:36:40.083 Removing: /var/run/dpdk/spdk_pid1613927 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1614079 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1614232 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1614504 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1614667 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1614824 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1615064 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1615252 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1615412 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1615563 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1615838 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1616004 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1616162 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1616431 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1616508 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1616714 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1618775 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1621280 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1628882 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1629406 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1631790 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1632071 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1634574 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1638244 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1640340 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1646633 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1651958 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1653157 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1653828 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1664649 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1666931 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1720221 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1723597 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1727432 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1731266 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1731270 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1731921 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1732463 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1733119 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1733514 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1733518 
00:36:40.342 Removing: /var/run/dpdk/spdk_pid1733776 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1733873 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1733919 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1734466 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1735105 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1735766 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1736161 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1736163 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1736425 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1737309 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1738020 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1743359 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1768616 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1771399 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1772690 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1774508 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1774529 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1774671 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1774804 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1775235 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1776436 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1777152 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1777482 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1779189 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1779491 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1780050 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1782438 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1785692 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1789218 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1812595 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1815342 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1819116 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1820053 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1821141 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1823714 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1825944 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1830147 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1830150 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1832918 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1833054 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1833184 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1833476 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1833581 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1835181 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1836454 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1837631 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1838805 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1839982 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1841160 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1844973 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1845307 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1846698 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1847435 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1851024 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1852993 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1856298 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1859720 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1866675 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1871015 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1871017 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1883210 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1883622 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1884024 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1884433 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1885007 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1885414 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1885822 
00:36:40.342 Removing: /var/run/dpdk/spdk_pid1886346 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1888721 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1888908 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1892644 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1892807 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1894411 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1899951 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1899956 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1902848 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1904240 00:36:40.342 Removing: /var/run/dpdk/spdk_pid1905636 00:36:40.600 Removing: /var/run/dpdk/spdk_pid1906382 00:36:40.600 Removing: /var/run/dpdk/spdk_pid1907784 00:36:40.600 Removing: /var/run/dpdk/spdk_pid1908653 00:36:40.600 Removing: /var/run/dpdk/spdk_pid1913930 00:36:40.600 Removing: /var/run/dpdk/spdk_pid1914307 00:36:40.600 Removing: /var/run/dpdk/spdk_pid1914698 00:36:40.600 Removing: /var/run/dpdk/spdk_pid1916249 00:36:40.601 Removing: /var/run/dpdk/spdk_pid1916644 00:36:40.601 Removing: /var/run/dpdk/spdk_pid1916923 00:36:40.601 Removing: /var/run/dpdk/spdk_pid1919364 00:36:40.601 Removing: /var/run/dpdk/spdk_pid1919376 00:36:40.601 Removing: /var/run/dpdk/spdk_pid1920796 00:36:40.601 Removing: /var/run/dpdk/spdk_pid1921188 00:36:40.601 Removing: /var/run/dpdk/spdk_pid1921215 00:36:40.601 Clean 00:36:40.601 06:32:33 -- common/autotest_common.sh@1451 -- # return 0 00:36:40.601 06:32:33 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:40.601 06:32:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:40.601 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:36:40.601 06:32:33 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:40.601 06:32:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:40.601 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:36:40.601 06:32:33 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:40.601 06:32:33 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:40.601 06:32:33 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:40.601 06:32:33 -- spdk/autotest.sh@391 -- # hash lcov 00:36:40.601 06:32:33 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:40.601 06:32:33 -- spdk/autotest.sh@393 -- # hostname 00:36:40.601 06:32:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:40.859 geninfo: WARNING: invalid characters removed from testname! 
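The lcov invocations that follow combine the pre-test baseline capture with the post-test capture and then strip bundled DPDK sources, system headers, and example apps from the merged tracefile. A condensed, illustrative form of that flow (option list abbreviated; the real commands below also pass the genhtml rc options):

out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
# merge the pre-test baseline with the post-test capture
lcov $opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# drop coverage entries that are not SPDK's own code
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $opts -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done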
00:37:19.591 06:33:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:19.591 06:33:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.133 06:33:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:25.431 06:33:18 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:27.974 06:33:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:31.272 06:33:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:33.811 06:33:27 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:33.811 06:33:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:33.811 06:33:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:33.811 06:33:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:33.811 06:33:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:33.811 06:33:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.811 06:33:27 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.811 06:33:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.811 06:33:27 -- paths/export.sh@5 -- $ export PATH 00:37:33.811 06:33:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.811 06:33:27 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:33.811 06:33:27 -- common/autobuild_common.sh@447 -- $ date +%s 00:37:33.811 06:33:27 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721709207.XXXXXX 00:37:33.811 06:33:27 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721709207.ZUEUjC 00:37:33.811 06:33:27 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:37:33.811 06:33:27 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:37:33.811 06:33:27 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:33.811 06:33:27 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:33.811 06:33:27 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:33.811 06:33:27 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:33.811 06:33:27 -- common/autobuild_common.sh@463 -- $ get_config_params 00:37:33.811 06:33:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:37:33.811 06:33:27 -- common/autotest_common.sh@10 -- $ set +x 00:37:33.811 06:33:27 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:33.811 06:33:27 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:37:33.811 06:33:27 -- pm/common@17 -- $ local monitor 00:37:33.812 06:33:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:33.812 06:33:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:33.812 06:33:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:33.812 
06:33:27 -- pm/common@21 -- $ date +%s 00:37:33.812 06:33:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:33.812 06:33:27 -- pm/common@21 -- $ date +%s 00:37:33.812 06:33:27 -- pm/common@25 -- $ sleep 1 00:37:33.812 06:33:27 -- pm/common@21 -- $ date +%s 00:37:33.812 06:33:27 -- pm/common@21 -- $ date +%s 00:37:33.812 06:33:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721709207 00:37:33.812 06:33:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721709207 00:37:33.812 06:33:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721709207 00:37:33.812 06:33:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721709207 00:37:33.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721709207_collect-vmstat.pm.log 00:37:33.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721709207_collect-cpu-load.pm.log 00:37:33.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721709207_collect-cpu-temp.pm.log 00:37:33.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721709207_collect-bmc-pm.bmc.pm.log 00:37:35.194 06:33:28 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:37:35.194 06:33:28 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:35.194 06:33:28 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:35.194 06:33:28 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:35.194 06:33:28 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:35.194 06:33:28 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:35.194 06:33:28 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:35.194 06:33:28 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:35.194 06:33:28 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:35.194 06:33:28 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:35.194 06:33:28 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:35.194 06:33:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:35.194 06:33:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:35.194 06:33:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:35.194 06:33:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:35.194 06:33:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:35.194 06:33:28 -- pm/common@44 -- $ pid=1933076 00:37:35.194 06:33:28 -- pm/common@50 -- $ kill -TERM 1933076 00:37:35.194 06:33:28 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:35.194 06:33:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:35.194 06:33:28 -- pm/common@44 -- $ pid=1933078 00:37:35.194 06:33:28 -- pm/common@50 -- $ kill -TERM 1933078 00:37:35.194 06:33:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:35.194 06:33:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:35.194 06:33:28 -- pm/common@44 -- $ pid=1933080 00:37:35.194 06:33:28 -- pm/common@50 -- $ kill -TERM 1933080 00:37:35.194 06:33:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:35.194 06:33:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:35.194 06:33:28 -- pm/common@44 -- $ pid=1933111 00:37:35.194 06:33:28 -- pm/common@50 -- $ sudo -E kill -TERM 1933111 00:37:35.194 + [[ -n 1500191 ]] 00:37:35.194 + sudo kill 1500191 00:37:35.205 [Pipeline] } 00:37:35.224 [Pipeline] // stage 00:37:35.230 [Pipeline] } 00:37:35.248 [Pipeline] // timeout 00:37:35.253 [Pipeline] } 00:37:35.271 [Pipeline] // catchError 00:37:35.277 [Pipeline] } 00:37:35.296 [Pipeline] // wrap 00:37:35.304 [Pipeline] } 00:37:35.320 [Pipeline] // catchError 00:37:35.331 [Pipeline] stage 00:37:35.333 [Pipeline] { (Epilogue) 00:37:35.349 [Pipeline] catchError 00:37:35.351 [Pipeline] { 00:37:35.366 [Pipeline] echo 00:37:35.368 Cleanup processes 00:37:35.374 [Pipeline] sh 00:37:35.662 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:35.662 1933233 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:35.662 1933341 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:35.677 [Pipeline] sh 00:37:35.984 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:35.984 ++ grep -v 'sudo pgrep' 00:37:35.984 ++ awk '{print $1}' 00:37:35.984 + sudo kill -9 1933233 00:37:35.997 [Pipeline] sh 00:37:36.282 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:46.288 [Pipeline] sh 00:37:46.575 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:46.576 Artifacts sizes are good 00:37:46.591 [Pipeline] archiveArtifacts 00:37:46.599 Archiving artifacts 00:37:46.820 [Pipeline] sh 00:37:47.106 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:47.122 [Pipeline] cleanWs 00:37:47.133 [WS-CLEANUP] Deleting project workspace... 00:37:47.133 [WS-CLEANUP] Deferred wipeout is used... 00:37:47.141 [WS-CLEANUP] done 00:37:47.143 [Pipeline] } 00:37:47.164 [Pipeline] // catchError 00:37:47.177 [Pipeline] sh 00:37:47.459 + logger -p user.info -t JENKINS-CI 00:37:47.468 [Pipeline] } 00:37:47.484 [Pipeline] // stage 00:37:47.490 [Pipeline] } 00:37:47.506 [Pipeline] // node 00:37:47.512 [Pipeline] End of Pipeline 00:37:47.551 Finished: SUCCESS